Jan 03 05:40:20 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 03 05:40:20 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:20 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 05:40:21 crc restorecon[4693]: 
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 
05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 05:40:21 crc 
restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 
05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 05:40:21 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 03 05:40:21 crc kubenswrapper[4854]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 03 05:40:21 crc kubenswrapper[4854]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 03 05:40:21 crc kubenswrapper[4854]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 03 05:40:21 crc kubenswrapper[4854]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 03 05:40:21 crc kubenswrapper[4854]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 03 05:40:21 crc kubenswrapper[4854]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.804009 4854 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807037 4854 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807053 4854 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807057 4854 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807062 4854 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807066 4854 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807070 4854 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807094 4854 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807098 4854 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807104 4854 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807109 4854 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807114 4854 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807118 4854 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807124 4854 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
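[Editor's illustration] The deprecated-flag warnings above all point at the same migration: each of those kubelet command-line flags is meant to move into the KubeletConfiguration file passed via --config, per the linked kubelet-config-file documentation. A minimal sketch of such a file follows; the socket path, plugin directory, taint, and reservation values are illustrative placeholders rather than this node's actual settings, and --pod-infra-container-image has no config-file equivalent (for CRI-O the sandbox image is set separately as pause_image in crio.conf). Feature gates the kubelet recognizes, such as the KMSv1=true setting logged above, belong in the featureGates map; the "unrecognized feature gate" warnings surrounding this point come from OpenShift-specific gates that the kubelet's own gate table does not contain, so listing those here would simply reproduce the warnings.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint (placeholder CRI-O socket path):
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# Replaces --volume-plugin-dir (placeholder path):
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# Replaces --register-with-taints (hypothetical taint, for illustration only):
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# Replaces --system-reserved (placeholder reservations):
systemReserved:
  cpu: 500m
  memory: 1Gi
# Replaces --minimum-container-ttl-duration, which the warning says to express
# as eviction thresholds instead (placeholder threshold):
evictionHard:
  memory.available: 100Mi
# Recognized feature gates move here as well:
featureGates:
  KMSv1: true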
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807129 4854 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807133 4854 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807137 4854 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807140 4854 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807144 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807148 4854 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807151 4854 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807155 4854 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807160 4854 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807165 4854 feature_gate.go:330] unrecognized feature gate: Example
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807169 4854 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807173 4854 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807177 4854 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807181 4854 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807185 4854 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807191 4854 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807195 4854 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807199 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807203 4854 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807207 4854 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807211 4854 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807215 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807218 4854 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807222 4854 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807227 4854 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807230 4854 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807234 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807238 4854 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807242 4854 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807245 4854 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807250 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807253 4854 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807257 4854 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807260 4854 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807264 4854 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807268 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807272 4854 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807275 4854 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807278 4854 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807282 4854 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807286 4854 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807289 4854 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807293 4854 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807297 4854 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807302 4854 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807305 4854 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807310 4854 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807315 4854 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807319 4854 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807322 4854 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807326 4854 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807330 4854 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807333 4854 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807338 4854 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807342 4854 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807346 4854 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807349 4854 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.807353 4854 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807425 4854 flags.go:64] FLAG: --address="0.0.0.0"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807433 4854 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807444 4854 flags.go:64] FLAG: --anonymous-auth="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807450 4854 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807455 4854 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807459 4854 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807465 4854 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807470 4854 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807474 4854 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807479 4854 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807483 4854 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807487 4854 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807492 4854 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807496 4854 flags.go:64] FLAG: --cgroup-root=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807500 4854 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807504 4854 flags.go:64] FLAG: --client-ca-file=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807508 4854 flags.go:64] FLAG: --cloud-config=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807513 4854 flags.go:64] FLAG: --cloud-provider=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807517 4854 flags.go:64] FLAG: --cluster-dns="[]"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807522 4854 flags.go:64] FLAG: --cluster-domain=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807526 4854 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807530 4854 flags.go:64] FLAG: --config-dir=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807534 4854 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807539 4854 flags.go:64] FLAG: --container-log-max-files="5"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807545 4854 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807548 4854 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807553 4854 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807557 4854 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807561 4854 flags.go:64] FLAG: --contention-profiling="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807566 4854 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807570 4854 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807574 4854 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807579 4854 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807584 4854 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807589 4854 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807593 4854 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807597 4854 flags.go:64] FLAG: --enable-load-reader="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807602 4854 flags.go:64] FLAG: --enable-server="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807606 4854 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807612 4854 flags.go:64] FLAG: --event-burst="100"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807616 4854 flags.go:64] FLAG: --event-qps="50"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807620 4854 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807624 4854 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807629 4854 flags.go:64] FLAG: --eviction-hard=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807634 4854 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807638 4854 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807642 4854 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807646 4854 flags.go:64] FLAG: --eviction-soft=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807651 4854 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807655 4854 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807659 4854 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807663 4854 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807668 4854 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807672 4854 flags.go:64] FLAG: --fail-swap-on="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807677 4854 flags.go:64] FLAG: --feature-gates=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807682 4854 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807687 4854 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807692 4854 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807696 4854 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807701 4854 flags.go:64] FLAG: --healthz-port="10248"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807706 4854 flags.go:64] FLAG: --help="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807711 4854 flags.go:64] FLAG: --hostname-override=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807715 4854 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807720 4854 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807724 4854 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807729 4854 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807734 4854 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807739 4854 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807743 4854 flags.go:64] FLAG: --image-service-endpoint=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807747 4854 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807751 4854 flags.go:64] FLAG: --kube-api-burst="100"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807755 4854 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807759 4854 flags.go:64] FLAG: --kube-api-qps="50"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807764 4854 flags.go:64] FLAG: --kube-reserved=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807768 4854 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807772 4854 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807776 4854 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807780 4854 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807784 4854 flags.go:64] FLAG: --lock-file=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807788 4854 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807792 4854 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807796 4854 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807802 4854 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807806 4854 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807810 4854 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807814 4854 flags.go:64] FLAG: --logging-format="text"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807818 4854 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807823 4854 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807827 4854 flags.go:64] FLAG: --manifest-url=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807830 4854 flags.go:64] FLAG: --manifest-url-header=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807836 4854 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807840 4854 flags.go:64] FLAG: --max-open-files="1000000"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807845 4854 flags.go:64] FLAG: --max-pods="110"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807849 4854 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807854 4854 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807858 4854 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807862 4854 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807867 4854 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807871 4854 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807876 4854 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807884 4854 flags.go:64] FLAG: --node-status-max-images="50"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807888 4854 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807893 4854 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807897 4854 flags.go:64] FLAG: --pod-cidr=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807901 4854 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807909 4854 flags.go:64] FLAG: --pod-manifest-path=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807913 4854 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807917 4854 flags.go:64] FLAG: --pods-per-core="0"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807921 4854 flags.go:64] FLAG: --port="10250"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807926 4854 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807931 4854 flags.go:64] FLAG: --provider-id=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807934 4854 flags.go:64] FLAG: --qos-reserved=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807939 4854 flags.go:64] FLAG: --read-only-port="10255"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807943 4854 flags.go:64] FLAG: --register-node="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807948 4854 flags.go:64] FLAG: --register-schedulable="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807952 4854 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807959 4854 flags.go:64] FLAG: --registry-burst="10"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807963 4854 flags.go:64] FLAG: --registry-qps="5"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807968 4854 flags.go:64] FLAG: --reserved-cpus=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807972 4854 flags.go:64] FLAG: --reserved-memory=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807978 4854 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807982 4854 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807986 4854 flags.go:64] FLAG: --rotate-certificates="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807991 4854 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807995 4854 flags.go:64] FLAG: --runonce="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.807999 4854 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808003 4854 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808008 4854 flags.go:64] FLAG: --seccomp-default="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808013 4854 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808017 4854 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808022 4854 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808026 4854 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808030 4854 flags.go:64] FLAG: --storage-driver-password="root"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808049 4854 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808054 4854 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808058 4854 flags.go:64] FLAG: --storage-driver-user="root"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808062 4854 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808067 4854 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808071 4854 flags.go:64] FLAG: --system-cgroups=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808088 4854 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808095 4854 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808099 4854 flags.go:64] FLAG: --tls-cert-file=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808103 4854 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808109 4854 flags.go:64] FLAG: --tls-min-version=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808113 4854 flags.go:64] FLAG: --tls-private-key-file=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808117 4854 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808121 4854 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808125 4854 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808130 4854 flags.go:64] FLAG: --v="2"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808135 4854 flags.go:64] FLAG: --version="false"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808144 4854 flags.go:64] FLAG: --vmodule=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808149 4854 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808153 4854 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808268 4854 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808273 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808277 4854 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808281 4854 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808285 4854 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808290 4854 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808295 4854 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808299 4854 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808303 4854 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808306 4854 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808310 4854 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808314 4854 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808317 4854 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808321 4854 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808324 4854 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808327 4854 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808331 4854 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808335 4854 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808339 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808342 4854 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808346 4854 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808350 4854 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808353 4854 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808357 4854 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808360 4854 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808363 4854 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808367 4854 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808370 4854 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808374 4854 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808380 4854 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808383 4854 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808387 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808390 4854 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808394 4854 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808398 4854 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808401 4854 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808405 4854 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808408 4854 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808412 4854 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808415 4854 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808418 4854 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808422 4854 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808427 4854 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808431 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808434 4854 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808438 4854 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808441 4854 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808445 4854 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808449 4854 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808452 4854 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808456 4854 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808459 4854 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808467 4854 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808471 4854 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808474 4854 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808478 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808482 4854 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808485 4854 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808489 4854 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808492 4854 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808495 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808500 4854 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808503 4854 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808507 4854 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808515 4854 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808519 4854 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808523 4854 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808527 4854 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808532 4854 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808537 4854 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
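[editor's note] The flags.go:64 dump above records every CLI value after parsing, defaults included, so values like --authorization-mode="AlwaysAllow" and --anonymous-auth="true" are only the flag-level defaults; a node like this one would normally override them in the config file. A sketch of the stanza that would do so; the values here are typical, not read from this node, except for the client CA path, which this log mentions later (client-ca-bundle::/etc/kubernetes/kubelet-ca.crt):

  # Illustrative authentication/authorization stanza, assuming typical values.
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  authentication:
    anonymous:
      enabled: false       # overrides flag-level --anonymous-auth="true"
    webhook:
      enabled: true
    x509:
      clientCAFile: /etc/kubernetes/kubelet-ca.crt
  authorization:
    mode: Webhook          # overrides flag-level --authorization-mode="AlwaysAllow"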
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.808541 4854 feature_gate.go:330] unrecognized feature gate: Example
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.808553 4854 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.819648 4854 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.819691 4854 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819832 4854 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819849 4854 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819860 4854 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819869 4854 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819877 4854 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819887 4854 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819895 4854 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819903 4854 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819910 4854 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819918 4854 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819926 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819933 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819942 4854 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819950 4854 feature_gate.go:330] unrecognized feature gate: Example
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819958 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819965 4854 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819973 4854 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819981 4854 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819989 4854 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.819997 4854 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820004 4854 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820013 4854 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820023 4854 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820033 4854 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820043 4854 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820107 4854 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820118 4854 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820129 4854 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820138 4854 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820147 4854 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820156 4854 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820167 4854 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820181 4854 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
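[editor's note] The resolved map at feature_gate.go:386 lists only gates the kubelet itself understands; the long runs of "unrecognized feature gate" warnings appear to be OpenShift cluster-level gates (GatewayAPI, NewOLM, PinnedImages, and so on) passed down to a component that only knows the upstream Kubernetes set, so it logs and ignores them. A sketch of how the recognized portion of that map would be expressed in the config file; this is an illustrative subset built from the log's own map, not this node's actual file:

  # Illustrative featureGates stanza mirroring the feature_gate.go:386 output.
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  featureGates:
    CloudDualStackNodeIPs: true
    DisableKubeletCloudCredentialProviders: true
    KMSv1: true
    ValidatingAdmissionPolicy: true
    DynamicResourceAllocation: false
    UserNamespacesSupport: false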
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820194 4854 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820205 4854 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820214 4854 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820221 4854 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820229 4854 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820237 4854 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820245 4854 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820253 4854 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820261 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820268 4854 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820277 4854 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820285 4854 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820292 4854 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820300 4854 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820310 4854 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820320 4854 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820330 4854 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820339 4854 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820347 4854 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820355 4854 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820365 4854 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820373 4854 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820381 4854 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820388 4854 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820396 4854 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820404 4854 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820412 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820419 4854 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820428 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820435 4854 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820445 4854 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820454 4854 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820464 4854 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820473 4854 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820481 4854 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820489 4854 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820497 4854 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820505 4854 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.820519 4854 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820803 4854 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820819 4854 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820831 4854 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820842 4854 feature_gate.go:330] unrecognized feature gate: Example
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820850 4854 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820882 4854 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820892 4854 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820901 4854 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820909 4854 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820919 4854 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820927 4854 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820935 4854 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820943 4854 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820950 4854 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820958 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820965 4854 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820973 4854 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820982 4854 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820990 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.820997 4854 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821005 4854 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821018 4854 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821028 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821038 4854 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821049 4854 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821118 4854 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821126 4854 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821134 4854 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821142 4854 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821150 4854 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821157 4854 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821165 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821175 4854 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821186 4854 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821198 4854 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821207 4854 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821217 4854 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821225 4854 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821233 4854 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821241 4854 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821249 4854 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821257 4854 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821265 4854 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821274 4854 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821282 4854 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821291 4854 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821300 4854 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821308 4854 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821316 4854 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821324 4854 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821331 4854 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821339 4854 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821347 4854 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821355 4854 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821363 4854 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821371 4854 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821379 4854 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821387 4854 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821394 4854 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821402 4854 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821410 4854 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821418 4854 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821425 4854 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821433 4854 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821441 4854 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821448 4854 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821456 4854 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821464 4854 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821471 4854 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821479 4854 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.821487 4854 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.821500 4854 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.822016 4854 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.826212 4854 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.826334 4854 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
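[editor's note] Client certificate rotation is evidently switched on in the config file, since the flag dump above recorded --rotate-certificates="false"; the entries below then show the manager picking a rotation deadline (2026-01-10) well before the certificate's expiry (2026-02-24), i.e. a jittered point late in the certificate's lifetime. A sketch of the config-file fields involved; illustrative only, and serverTLSBootstrap is an assumption this log does not confirm:

  # Illustrative certificate-rotation stanza.
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  rotateCertificates: true    # produces the "Certificate rotation is enabled" entries below
  serverTLSBootstrap: true    # assumption: serving-cert rotation is not confirmed by this log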
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.827155 4854 server.go:997] "Starting client certificate rotation"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.827191 4854 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.827395 4854 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-10 07:37:52.229034659 +0000 UTC
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.827555 4854 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 169h57m30.401484767s for next certificate rotation
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.852201 4854 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.854668 4854 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.865719 4854 log.go:25] "Validated CRI v1 runtime API"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.887402 4854 log.go:25] "Validated CRI v1 image API"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.889464 4854 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.893228 4854 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-03-05-35-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.893275 4854 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.918564 4854 manager.go:217] Machine: {Timestamp:2026-01-03 05:40:21.916574234 +0000 UTC m=+0.243150886 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:70824611-2ad5-40b8-af0f-fb136ff2a322 BootID:50a66242-f853-4864-8639-b84ace4c39eb Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7b:f4:a6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7b:f4:a6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7d:c3:7a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:24:30:1e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:58:8c:35 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:46:ad:5f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:a5:33:c5:be:ce Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1a:dc:97:05:36:65 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.918983 4854 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.919175 4854 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.920273 4854 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.920567 4854 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.920609 4854 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.921016 4854 topology_manager.go:138] "Creating topology manager with none policy"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.921040 4854 container_manager_linux.go:303] "Creating device plugin manager"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.921256 4854 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.921308 4854 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.921517 4854 state_mem.go:36] "Initialized new in-memory state store"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.921654 4854 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.922562 4854 kubelet.go:418] "Attempting to sync node with API server"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.922593 4854 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.922617 4854 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.922637 4854 kubelet.go:324] "Adding apiserver pod source"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.922655 4854 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.924642 4854 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.925221 4854 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.926144 4854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.926260 4854 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 03 05:40:21 crc kubenswrapper[4854]: E0103 05:40:21.926264 4854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.102:6443: connect: connection refused" logger="UnhandledError"
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.926250 4854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused
Jan 03 05:40:21 crc kubenswrapper[4854]: E0103 05:40:21.926380 4854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.102:6443: connect: connection refused" logger="UnhandledError"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.926993 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927034 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927049 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927063 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927112 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927126 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927166 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927190 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927205 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927218 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927252 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927267 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.927575 4854 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.928234 4854 server.go:1280] "Started kubelet"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.928767 4854 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.929344 4854 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 03 05:40:21 crc systemd[1]: Started Kubernetes Kubelet.
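Every Get and Post above that fails with "dial tcp 38.102.83.102:6443: connect: connection refused" has the same cause: the kubelet comes up before the kube-apiserver static pod it is itself about to launch from /etc/kubernetes/manifests, so nothing is listening on api-int.crc.testing:6443 yet, and the node reflectors, lease controller, and CSINode check all fail and retry. A small probe makes the failure mode visible; this is a diagnostic sketch only, assuming it runs on the node itself, with the endpoint taken from the log lines above:

    // dialprobe.go - distinguish "connection refused" (nothing listening yet)
    // from a reachable endpoint where failures would instead be TLS or auth.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
        if err != nil {
            // Expected during early kubelet startup: the kube-apiserver
            // static pod has not been started by this same kubelet yet.
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("TCP connect OK; remaining failures would be TLS/auth, not refusal")
    }

In the window this log covers, the probe would print the same "connect: connection refused" seen above; once the static pod is running, the dial succeeds and the reflector retries clear on their own.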
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.931007 4854 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.931701 4854 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.932861 4854 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.932929 4854 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.933137 4854 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:01:29.134910783 +0000 UTC
Jan 03 05:40:21 crc kubenswrapper[4854]: E0103 05:40:21.933512 4854 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.933561 4854 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.933531 4854 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.933612 4854 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.934535 4854 factory.go:55] Registering systemd factory
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.934608 4854 factory.go:221] Registration of the systemd container factory successfully
Jan 03 05:40:21 crc kubenswrapper[4854]: E0103 05:40:21.934620 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="200ms"
Jan 03 05:40:21 crc kubenswrapper[4854]: W0103 05:40:21.934931 4854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused
Jan 03 05:40:21 crc kubenswrapper[4854]: E0103 05:40:21.935062 4854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.102:6443: connect: connection refused" logger="UnhandledError"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.935294 4854 factory.go:153] Registering CRI-O factory
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.935338 4854 factory.go:221] Registration of the crio container factory successfully
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.935470 4854 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.935529 4854 factory.go:103] Registering Raw factory
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.935557 4854 manager.go:1196] Started watching for new ooms in manager
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.936687 4854 manager.go:319] Starting recovery of all containers
Jan 03 05:40:21 crc kubenswrapper[4854]: E0103 05:40:21.941744 4854 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.102:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1887220dea5e9ed7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 05:40:21.928197847 +0000 UTC m=+0.254774449,LastTimestamp:2026-01-03 05:40:21.928197847 +0000 UTC m=+0.254774449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.943794 4854 server.go:460] "Adding debug handlers to kubelet server"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952537 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952675 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952699 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952719 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952738 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952757 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952774 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952792 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952811 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952830 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952848 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952871 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952889 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952913 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952932 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952951 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952973 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.952991 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953009 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953027 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953046 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953064 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953133 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953158 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953177 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953195 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953289 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953311 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953330 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953355 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953374 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953396 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953418 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953439 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953459 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953480 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953499 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953519 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953541 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953561 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953580 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953599 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953618 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953637 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953656 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953680 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953701 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953719 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953738 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953758 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953776 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953795 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953821 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953843 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953863 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953884 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953904 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953923 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953943 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953962 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.953980 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954001 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954019 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954071 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954223 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954243 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954291 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954311 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954330 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954348 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954366 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954388 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954407 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954430 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954449 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954468 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954485 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954503 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954521 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954539 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954559 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954577 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954596 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954614 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954632 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954651 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954669 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954688 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954705 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954723 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954741 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954761 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954781 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954800 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954818 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954838 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954858 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954879 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954898 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954917 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954937 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954956 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954974 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.954992 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955021 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955042 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955063 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955108 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955882 4854 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955923 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955946 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955970 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.955992 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956014 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956034 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956055 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956109 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956129 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956148 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956168 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956213 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956233 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956252 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956270 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956291 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956311 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956329 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956348 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956368 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state"
pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956389 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956407 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956425 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956443 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956461 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956479 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956498 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956534 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956554 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956573 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956591 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956610 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956628 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956648 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956667 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956687 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956705 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956722 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956740 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956760 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956777 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956796 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956814 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956835 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956854 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956871 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956892 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956911 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956928 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956946 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956965 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.956984 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957004 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957022 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957040 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957058 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957099 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957118 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957136 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957154 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957172 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957193 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957211 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957229 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957246 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957265 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957284 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957306 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957324 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957347 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957364 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957384 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957401 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957419 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957436 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957453 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957471 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957489 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957507 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957528 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957545 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957562 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957580 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957599 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957616 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957633 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957651 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957669 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957687 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957703 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957721 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957743 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957762 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957779 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957798 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957814 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957832 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957850 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957868 4854 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957887 4854 reconstruct.go:97] "Volume reconstruction finished" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.957900 4854 reconciler.go:26] "Reconciler: start to sync state" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.969290 4854 manager.go:324] Recovery completed Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.985829 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.989428 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.989507 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.989547 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.992717 4854 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.992841 4854 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 03 05:40:21 crc kubenswrapper[4854]: I0103 05:40:21.992923 4854 state_mem.go:36] "Initialized new in-memory state store" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.034459 4854 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.113363 4854 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.116282 4854 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.116701 4854 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.116782 4854 policy_none.go:49] "None policy: Start" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.116803 4854 kubelet.go:2335] "Starting kubelet main sync loop" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.117307 4854 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.118614 4854 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.118832 4854 state_mem.go:35] "Initializing new in-memory state store" Jan 03 05:40:22 crc kubenswrapper[4854]: W0103 05:40:22.118603 4854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.118953 4854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.102:6443: connect: connection refused" logger="UnhandledError" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.134858 4854 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.135383 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="400ms" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.184508 4854 manager.go:334] "Starting Device Plugin manager" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.185249 4854 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.185312 4854 server.go:79] "Starting device plugin registration server" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.186219 4854 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.186246 4854 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.186452 4854 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.186573 4854 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.186589 4854 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.201256 4854 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.218469 4854 kubelet.go:2421] 
"SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.218813 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.220608 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.220683 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.220699 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.220969 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.221072 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.221118 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222063 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222099 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222112 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222270 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222285 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222293 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222386 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222562 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.222608 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.223043 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.223063 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.223070 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.223165 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.223349 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.223419 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.224479 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.224524 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.224539 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225179 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225202 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225212 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225286 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225323 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225385 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225410 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225354 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.225439 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.226746 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.226774 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.226785 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.226935 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.226965 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.227106 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.227138 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.227151 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.228026 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.228056 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.228068 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269043 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269141 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269194 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269234 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269266 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269339 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269414 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269457 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269484 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269513 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269616 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269672 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269707 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269744 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.269794 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.286995 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.289410 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.289469 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.289487 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.289527 4854 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.290775 4854 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.102:6443: connect: connection refused" node="crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370596 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370656 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370694 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370731 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370761 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370791 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370821 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370852 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370885 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370858 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370947 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370904 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370969 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 
05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371000 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370961 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370904 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370980 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370915 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371244 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.370951 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371006 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371298 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371324 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371353 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371406 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371434 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371447 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371405 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371531 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.371620 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.491897 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.494009 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.494131 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.494158 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.494203 4854 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.494958 4854 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.102:6443: connect: connection refused" node="crc" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.537306 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="800ms" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.558579 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.580364 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: W0103 05:40:22.593590 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ee1dd0c1a3b406040dabc2d9ed0a2b0086b57e2c3a8b6c2cd85bcfc1b042d0e4 WatchSource:0}: Error finding container ee1dd0c1a3b406040dabc2d9ed0a2b0086b57e2c3a8b6c2cd85bcfc1b042d0e4: Status 404 returned error can't find the container with id ee1dd0c1a3b406040dabc2d9ed0a2b0086b57e2c3a8b6c2cd85bcfc1b042d0e4 Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.596175 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.615058 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.626201 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:22 crc kubenswrapper[4854]: W0103 05:40:22.670603 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-64790143e744feecc69beb4a54703726f497886c2e48af6f98cbc04a2c021ff0 WatchSource:0}: Error finding container 64790143e744feecc69beb4a54703726f497886c2e48af6f98cbc04a2c021ff0: Status 404 returned error can't find the container with id 64790143e744feecc69beb4a54703726f497886c2e48af6f98cbc04a2c021ff0 Jan 03 05:40:22 crc kubenswrapper[4854]: W0103 05:40:22.674383 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-46412c25a2b239fa036681d0e6a94d752b80a2d4989e8f44a1cde177ea42962b WatchSource:0}: Error finding container 46412c25a2b239fa036681d0e6a94d752b80a2d4989e8f44a1cde177ea42962b: Status 404 returned error can't find the container with id 46412c25a2b239fa036681d0e6a94d752b80a2d4989e8f44a1cde177ea42962b Jan 03 05:40:22 crc kubenswrapper[4854]: W0103 05:40:22.689772 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e8f4a0e5c48e6c2d24d6d3e46c05300dd93bf3f0e197d0e1a3acc1b0faf6a0c6 WatchSource:0}: Error finding container e8f4a0e5c48e6c2d24d6d3e46c05300dd93bf3f0e197d0e1a3acc1b0faf6a0c6: Status 404 returned error can't find the container with id e8f4a0e5c48e6c2d24d6d3e46c05300dd93bf3f0e197d0e1a3acc1b0faf6a0c6 Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.895709 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.897419 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.897467 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.897480 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.897511 4854 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 05:40:22 crc kubenswrapper[4854]: E0103 05:40:22.897888 4854 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.102:6443: connect: connection refused" node="crc" Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.930371 4854 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused Jan 03 05:40:22 crc kubenswrapper[4854]: I0103 05:40:22.933429 4854 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:26:33.099934044 +0000 UTC Jan 03 05:40:23 crc kubenswrapper[4854]: W0103 05:40:23.043394 4854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused Jan 03 05:40:23 crc kubenswrapper[4854]: E0103 05:40:23.043730 4854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.102:6443: connect: connection refused" logger="UnhandledError" Jan 03 05:40:23 crc kubenswrapper[4854]: W0103 05:40:23.087538 4854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused Jan 03 05:40:23 crc kubenswrapper[4854]: E0103 05:40:23.087617 4854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.102:6443: connect: connection refused" logger="UnhandledError" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.125480 4854 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ed66804a466bc2221b411502b22b2c7c043b334818c258491594f979e17aadec" exitCode=0 Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.125513 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ed66804a466bc2221b411502b22b2c7c043b334818c258491594f979e17aadec"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.125641 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e8f4a0e5c48e6c2d24d6d3e46c05300dd93bf3f0e197d0e1a3acc1b0faf6a0c6"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.125743 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.126916 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.126962 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.126974 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.129018 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.129043 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"46412c25a2b239fa036681d0e6a94d752b80a2d4989e8f44a1cde177ea42962b"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.131334 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365" exitCode=0 Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.131392 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.131412 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64790143e744feecc69beb4a54703726f497886c2e48af6f98cbc04a2c021ff0"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.131500 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.132250 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.132278 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.132290 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.133512 4854 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453" exitCode=0 Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.133582 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.133602 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ecf9036ccd70191030d810f95f4fd73ad7c7b5d06b9eb6b2edd52b073b8d277d"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.133702 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.134633 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.134663 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.134674 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.135432 4854 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8c7ff250efb3da0b802d9c30a8f11402f516dd1cd3a89305d7cc5c6bbdb27782" exitCode=0 Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.135460 4854 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8c7ff250efb3da0b802d9c30a8f11402f516dd1cd3a89305d7cc5c6bbdb27782"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.135477 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ee1dd0c1a3b406040dabc2d9ed0a2b0086b57e2c3a8b6c2cd85bcfc1b042d0e4"} Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.135534 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.136596 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.136631 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.136643 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.138277 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.138999 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.139032 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.139045 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:23 crc kubenswrapper[4854]: W0103 05:40:23.184571 4854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.102:6443: connect: connection refused Jan 03 05:40:23 crc kubenswrapper[4854]: E0103 05:40:23.184661 4854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.102:6443: connect: connection refused" logger="UnhandledError" Jan 03 05:40:23 crc kubenswrapper[4854]: E0103 05:40:23.338275 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="1.6s" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.698236 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.699818 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.699873 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:23 crc kubenswrapper[4854]: 
I0103 05:40:23.699887 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.699923 4854 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 05:40:23 crc kubenswrapper[4854]: E0103 05:40:23.700436 4854 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.102:6443: connect: connection refused" node="crc" Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.933819 4854 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:08:55.486264301 +0000 UTC Jan 03 05:40:23 crc kubenswrapper[4854]: I0103 05:40:23.933937 4854 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 344h28m31.55233222s for next certificate rotation Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.139285 4854 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6" exitCode=0 Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.139367 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.139510 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.140287 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.140311 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.140320 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.141992 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b934cfdacefec92d0eb69c8cf5004f3ddfa12edf9f48af268c2707e16a6b7e7a"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.142064 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.142823 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.142836 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.142844 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.144957 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"52fb1f4395629d8f00f82a8324e7739636869285eabc05188b37ca6e51640d3f"} Jan 03 05:40:24 crc 
kubenswrapper[4854]: I0103 05:40:24.144976 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7d65fc3a0745d5d5875cf5e79b605e2927c694bca7ecef138651dcfc23c3327b"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.144985 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"55270352f5471b764b0a82547a3e32e501cb02d67a0f9aa93cb3b0059abd41b5"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.145042 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.145681 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.145702 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.145710 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.147692 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"15993a4d28dfd9bd6fd9d3ecc22d1f921e48f19e6359ea378b0ba5ca47b283d0"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.147710 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"96231fb89824a6f67751829855710db81ef40260a52c5366183c339af4856af0"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.147720 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0dc74577e913605955b4ddbf4208de574ab14f6c77fff4c53687e70371d857ee"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.147775 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.148417 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.148431 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.148439 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.151904 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.151924 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.151935 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57"} Jan 03 05:40:24 crc kubenswrapper[4854]: I0103 05:40:24.803574 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.174473 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.175845 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674"} Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.175942 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb"} Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.176744 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.176787 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.176808 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.180665 4854 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f" exitCode=0 Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.180746 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f"} Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.181172 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.181175 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.182899 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.182922 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.182957 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.182976 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 
05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.182986 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.182993 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.301574 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.303212 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.303311 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.303338 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:25 crc kubenswrapper[4854]: I0103 05:40:25.303382 4854 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.190032 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94e2e5aac4f318eebc95869a2fb2301f56571e7b52f1996be2f2d80c0e9fe578"} Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.190169 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.190171 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a694381b576de418c9972a7b4db718785c902db8a17c37c173e4370943cf0ee4"} Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.190349 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1535f5d804da272f32468a4941f4f8fbc8d2a09d2b9f0ef0730a742f5ae083ce"} Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.190387 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"afb56b54e21a5aff9e1341447526cd6a24d348475b32ef74e8df1ce2af6325ea"} Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.190111 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.190471 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.191369 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.191448 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.191473 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.191961 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 
05:40:26.192053 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.192112 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:26 crc kubenswrapper[4854]: I0103 05:40:26.790982 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.199831 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd30be1141fbd926ac80423210355f41447055faaccbdc196af8ad354ffe8581"} Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.199907 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.200783 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.201250 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.201294 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.201312 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.202520 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.202690 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.202822 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.483648 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.483867 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.483921 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.485299 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.485361 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:27 crc kubenswrapper[4854]: I0103 05:40:27.485384 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.202679 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.204250 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 
05:40:28.204331 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.204353 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.408743 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.409265 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.411019 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.411128 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.411188 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.607579 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.607773 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.607826 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.609126 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.609158 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:28 crc kubenswrapper[4854]: I0103 05:40:28.609171 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:29 crc kubenswrapper[4854]: I0103 05:40:29.613194 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 03 05:40:29 crc kubenswrapper[4854]: I0103 05:40:29.613484 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:29 crc kubenswrapper[4854]: I0103 05:40:29.615309 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:29 crc kubenswrapper[4854]: I0103 05:40:29.615372 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:29 crc kubenswrapper[4854]: I0103 05:40:29.615390 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:30 crc kubenswrapper[4854]: I0103 05:40:30.212943 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:30 crc kubenswrapper[4854]: I0103 05:40:30.213162 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:30 crc kubenswrapper[4854]: I0103 05:40:30.214492 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 03 05:40:30 crc kubenswrapper[4854]: I0103 05:40:30.214549 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:30 crc kubenswrapper[4854]: I0103 05:40:30.214570 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.341359 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.341694 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.343870 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.344005 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.344029 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.348818 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.410226 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.410321 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.869173 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.869522 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.871248 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.871340 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:31 crc kubenswrapper[4854]: I0103 05:40:31.871362 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:32 crc kubenswrapper[4854]: E0103 05:40:32.201409 4854 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 03 05:40:32 crc kubenswrapper[4854]: I0103 05:40:32.214416 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:32 crc kubenswrapper[4854]: 
I0103 05:40:32.215771 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:32 crc kubenswrapper[4854]: I0103 05:40:32.215858 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:32 crc kubenswrapper[4854]: I0103 05:40:32.215877 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:32 crc kubenswrapper[4854]: I0103 05:40:32.286799 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 05:40:32 crc kubenswrapper[4854]: I0103 05:40:32.287206 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:32 crc kubenswrapper[4854]: I0103 05:40:32.288988 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:32 crc kubenswrapper[4854]: I0103 05:40:32.289129 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:32 crc kubenswrapper[4854]: I0103 05:40:32.289157 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:33 crc kubenswrapper[4854]: W0103 05:40:33.722683 4854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 03 05:40:33 crc kubenswrapper[4854]: I0103 05:40:33.722848 4854 trace.go:236] Trace[1447915276]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 05:40:23.720) (total time: 10002ms): Jan 03 05:40:33 crc kubenswrapper[4854]: Trace[1447915276]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (05:40:33.722) Jan 03 05:40:33 crc kubenswrapper[4854]: Trace[1447915276]: [10.002685924s] [10.002685924s] END Jan 03 05:40:33 crc kubenswrapper[4854]: E0103 05:40:33.722888 4854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 03 05:40:33 crc kubenswrapper[4854]: I0103 05:40:33.929954 4854 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 03 05:40:34 crc kubenswrapper[4854]: E0103 05:40:34.939268 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 03 05:40:35 crc kubenswrapper[4854]: I0103 05:40:35.006016 4854 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 03 05:40:35 crc kubenswrapper[4854]: I0103 05:40:35.006155 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 03 05:40:35 crc kubenswrapper[4854]: I0103 05:40:35.011851 4854 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 03 05:40:35 crc kubenswrapper[4854]: I0103 05:40:35.011914 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 03 05:40:36 crc kubenswrapper[4854]: I0103 05:40:36.799248 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:36 crc kubenswrapper[4854]: I0103 05:40:36.799422 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:36 crc kubenswrapper[4854]: I0103 05:40:36.800740 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:36 crc kubenswrapper[4854]: I0103 05:40:36.800772 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:36 crc kubenswrapper[4854]: I0103 05:40:36.800781 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:37 crc kubenswrapper[4854]: I0103 05:40:37.492392 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:37 crc kubenswrapper[4854]: I0103 05:40:37.492723 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:37 crc kubenswrapper[4854]: I0103 05:40:37.493071 4854 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 03 05:40:37 crc kubenswrapper[4854]: I0103 05:40:37.493171 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 03 05:40:37 crc kubenswrapper[4854]: I0103 05:40:37.494455 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:37 crc kubenswrapper[4854]: I0103 05:40:37.494530 4854 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:37 crc kubenswrapper[4854]: I0103 05:40:37.494544 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:37 crc kubenswrapper[4854]: I0103 05:40:37.498972 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:38 crc kubenswrapper[4854]: I0103 05:40:38.392726 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:38 crc kubenswrapper[4854]: I0103 05:40:38.393234 4854 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 03 05:40:38 crc kubenswrapper[4854]: I0103 05:40:38.393335 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 03 05:40:38 crc kubenswrapper[4854]: I0103 05:40:38.394341 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:38 crc kubenswrapper[4854]: I0103 05:40:38.394396 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:38 crc kubenswrapper[4854]: I0103 05:40:38.394416 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.001289 4854 trace.go:236] Trace[1893029563]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 05:40:25.544) (total time: 14456ms): Jan 03 05:40:40 crc kubenswrapper[4854]: Trace[1893029563]: ---"Objects listed" error: 14456ms (05:40:40.001) Jan 03 05:40:40 crc kubenswrapper[4854]: Trace[1893029563]: [14.456833769s] [14.456833769s] END Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.001343 4854 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.002938 4854 trace.go:236] Trace[1235025818]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 05:40:25.253) (total time: 14749ms): Jan 03 05:40:40 crc kubenswrapper[4854]: Trace[1235025818]: ---"Objects listed" error: 14749ms (05:40:40.002) Jan 03 05:40:40 crc kubenswrapper[4854]: Trace[1235025818]: [14.749477516s] [14.749477516s] END Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.002983 4854 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.006498 4854 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.008348 4854 trace.go:236] Trace[1729375128]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 05:40:25.607) (total time: 14401ms): Jan 03 05:40:40 crc kubenswrapper[4854]: Trace[1729375128]: ---"Objects listed" error: 14400ms (05:40:40.007) Jan 03 05:40:40 crc 
kubenswrapper[4854]: Trace[1729375128]: [14.401217222s] [14.401217222s] END Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.008388 4854 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 03 05:40:40 crc kubenswrapper[4854]: E0103 05:40:40.010471 4854 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.049810 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.056592 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.275350 4854 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47236->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.275408 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47236->192.168.126.11:17697: read: connection reset by peer" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.400942 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.403370 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674" exitCode=255 Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.403422 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674"} Jan 03 05:40:40 crc kubenswrapper[4854]: E0103 05:40:40.410890 4854 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.413047 4854 scope.go:117] "RemoveContainer" containerID="5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.709979 4854 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.935054 4854 apiserver.go:52] "Watching apiserver" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.944060 4854 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.944568 4854 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.945155 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.945263 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.945405 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.945438 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:40 crc kubenswrapper[4854]: E0103 05:40:40.945571 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:40 crc kubenswrapper[4854]: E0103 05:40:40.945704 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.946465 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.946500 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:40 crc kubenswrapper[4854]: E0103 05:40:40.946560 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.948960 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.953807 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.954143 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.954400 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.954430 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.954497 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.954715 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.954847 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.955502 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 03 05:40:40 crc kubenswrapper[4854]: I0103 05:40:40.989146 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.003999 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef192bc6-bffe-4f90-84b6-918559c5f545\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc74577e913605955b4ddbf4208de574ab14f6c77fff4c53687e70371d857ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96231fb89824a6f67751829855710db81ef40260a52c5366183c339af4856af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15993a4d28dfd9bd6fd9d3ecc22d1f921e48f19e6359ea378b0ba5ca47b283d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.023821 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.034680 4854 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.037289 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.049402 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.063607 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.081466 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.096189 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.111892 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24758257-4839-46a6-836c-76b2208dda54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03
T05:40:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 05:40:40.014595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 05:40:40.014962 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 05:40:40.017243 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1571202385/tls.crt::/tmp/serving-cert-1571202385/tls.key\\\\\\\"\\\\nI0103 05:40:40.260303 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 05:40:40.263289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 05:40:40.263307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 05:40:40.263328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 05:40:40.263337 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 05:40:40.268449 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0103 05:40:40.268470 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0103 05:40:40.268495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268502 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 05:40:40.268513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 05:40:40.268519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 05:40:40.268523 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0103 05:40:40.270651 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113149 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113218 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113278 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113318 4854 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113352 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113397 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113444 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113480 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113516 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113546 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113585 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113615 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113643 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113674 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113705 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113735 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113766 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113802 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113833 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113867 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113897 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113901 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.113929 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114043 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114130 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114180 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114217 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114250 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114255 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114284 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114362 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114403 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114438 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114472 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114473 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114506 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114538 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114572 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114605 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114638 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114675 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114711 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114746 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114777 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114808 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114870 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114904 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114935 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114986 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115045 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115130 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115165 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115196 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115238 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115300 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115330 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115364 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115397 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115432 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115463 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115495 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115526 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115557 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115587 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115623 4854 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115657 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115692 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115723 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115755 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115787 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115860 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115894 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115927 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115960 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 
05:40:41.115996 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116027 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116065 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116122 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116156 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116189 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116223 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116533 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116576 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116609 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 05:40:41 crc 
kubenswrapper[4854]: I0103 05:40:41.116635 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116662 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116688 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116716 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116742 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116771 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116801 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116829 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116857 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116883 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 
05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116908 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116936 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116959 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116984 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117006 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117028 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117054 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117098 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117126 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117152 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 05:40:41 crc kubenswrapper[4854]: 
I0103 05:40:41.117177 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117203 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117226 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117285 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117312 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117334 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117358 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117386 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117410 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117434 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 
05:40:41.117457 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117481 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117503 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120433 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120529 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120600 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120675 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120754 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120833 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120907 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120982 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121061 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121161 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121241 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121336 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121403 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121485 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121582 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121681 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121755 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121833 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121923 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121990 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122065 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122183 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122265 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122341 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122413 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122495 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122543 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122606 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122666 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122741 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122810 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122872 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114569 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114662 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.123338 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.123622 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.123204 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.123808 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.123984 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.124013 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.124184 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114821 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.114865 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115033 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115177 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115612 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115520 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115693 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115702 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115719 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.115727 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116148 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116164 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116207 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116207 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116197 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116470 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116487 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116501 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.124823 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116519 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116559 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116682 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.116711 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117116 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117352 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117384 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.117409 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.118777 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.118847 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.119024 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.119243 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.119372 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.119484 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.119512 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.119630 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120323 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120606 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120763 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120872 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.120966 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121132 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121192 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121353 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121433 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121479 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.121829 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122014 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122389 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122718 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.122864 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.123046 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.123057 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.124973 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.126291 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.126577 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.126600 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.126911 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127121 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127168 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127174 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127248 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127271 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127279 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127450 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127530 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127812 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127902 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.127990 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). 
InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.128245 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.128354 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.128605 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.128650 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.129704 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.129790 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.130118 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.130160 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.130168 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.130389 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.130632 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.128033 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.131046 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.131192 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.131754 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.132275 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.132357 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.129582 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.128545 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.132539 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.132631 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.132655 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:40:41.63262971 +0000 UTC m=+19.959206282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.132760 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.132794 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.133025 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.130582 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.133212 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.133384 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.133400 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.133621 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.133657 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.131231 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.134003 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.134054 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.136164 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137057 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137279 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137528 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137564 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137592 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137624 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137801 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137828 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137853 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138869 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138909 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138937 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138962 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138985 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139009 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139031 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139054 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139151 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139180 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139343 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139378 4854 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139405 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139428 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139453 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139477 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139502 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139525 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139550 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139572 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139596 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139620 4854 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139644 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139665 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139690 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139715 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139741 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139772 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139798 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139822 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139848 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139870 4854 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139892 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139918 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139944 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140009 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140046 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140074 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140123 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140150 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 
05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140179 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140215 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137863 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140245 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.137991 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140261 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138161 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138200 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140220 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140299 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138395 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138415 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138894 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.138801 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139187 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139432 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140457 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140610 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140700 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140330 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140842 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140882 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140928 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.140974 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141002 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141015 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141159 4854 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141185 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141206 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141226 4854 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141245 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc 
kubenswrapper[4854]: I0103 05:40:41.141264 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141283 4854 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141308 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141333 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141396 4854 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141419 4854 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141441 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141459 4854 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141477 4854 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141496 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141515 4854 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141534 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141553 4854 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141575 4854 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141594 4854 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141612 4854 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141630 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141651 4854 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141670 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141691 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141710 4854 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141730 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141749 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141781 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141804 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141825 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc 
kubenswrapper[4854]: I0103 05:40:41.141844 4854 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141862 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141881 4854 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141899 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141918 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141937 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141956 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141975 4854 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141995 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142016 4854 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142036 4854 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142056 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142075 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 
05:40:41.142118 4854 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142138 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142156 4854 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142175 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142194 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142214 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141003 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141359 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141636 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141647 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.141768 4854 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141744 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.141980 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142068 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142159 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.139938 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142768 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142984 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.142993 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.143200 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.143846 4854 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.144797 4854 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.144948 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:41.644895118 +0000 UTC m=+19.971471780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.147270 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:41.647245949 +0000 UTC m=+19.973822561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.147378 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.147794 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.147990 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.149674 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.149840 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.149888 4854 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.149921 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.149953 4854 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.149984 4854 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150013 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150043 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150075 4854 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150162 4854 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150196 4854 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150392 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150424 4854 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150541 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.150565 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151019 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151175 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151244 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151288 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151507 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151533 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151827 4854 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151863 4854 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151882 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151898 4854 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151914 4854 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151933 4854 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151948 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151965 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 
03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151977 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.151991 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.152004 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.152017 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.152029 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.152045 4854 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.152058 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.152072 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.152168 4854 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.153926 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.153955 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.153972 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.153988 4854 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" 
DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154004 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154022 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154037 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154049 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154058 4854 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154068 4854 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154093 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154106 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154118 4854 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154133 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154145 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154157 4854 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154168 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 
05:40:41.154180 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154192 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154206 4854 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154218 4854 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154229 4854 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154242 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154253 4854 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154264 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154277 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154290 4854 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154302 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154314 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154327 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath 
\"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154339 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154352 4854 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154364 4854 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154376 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154388 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154402 4854 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154416 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154429 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154441 4854 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154452 4854 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154464 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154477 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154489 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" 
Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.154501 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.157734 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.157926 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.158171 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.158484 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.164741 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.165549 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.165938 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.165973 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.165995 4854 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.166132 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:41.666103218 +0000 UTC m=+19.992679810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.166318 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.166604 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.167855 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.167889 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.167904 4854 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.167959 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:41.667938605 +0000 UTC m=+19.994515177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.169819 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.169867 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.170137 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.170161 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.170170 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.173643 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.174055 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.174186 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.174247 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.174301 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.174534 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.174811 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.174613 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.175072 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.175314 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.178520 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.178692 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.178733 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.178787 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.179069 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.179172 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.179704 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.181392 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.181125 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.181650 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.182056 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.183655 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.183853 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.184041 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.184465 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.185206 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.185240 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.185561 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.185646 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.187275 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.187469 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.187541 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.187748 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.188107 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.188380 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.200297 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.205966 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.220448 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.255815 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.255913 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.255964 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.255988 4854 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256049 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256069 4854 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256101 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256116 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256128 4854 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256141 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256153 4854 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256165 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256180 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256193 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256205 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256217 4854 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256230 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256253 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256267 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256280 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256296 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256309 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256322 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256334 4854 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256346 4854 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") 
on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256228 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256359 4854 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256510 4854 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256560 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256577 4854 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256591 4854 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256603 4854 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256648 4854 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256661 4854 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256673 4854 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256688 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256726 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256740 4854 reconciler_common.go:293] 
"Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256753 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256818 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256829 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256845 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256856 4854 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256900 4854 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256913 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256926 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256940 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.256993 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257006 4854 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257019 4854 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257032 4854 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257067 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257120 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257133 4854 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257145 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257157 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257196 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257210 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257222 4854 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257233 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257244 4854 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257283 4854 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257297 4854 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257309 4854 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257322 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257333 4854 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257371 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257382 4854 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257394 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257406 4854 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257417 4854 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257458 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257471 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257483 4854 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.257532 4854 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.281817 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.294017 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 05:40:41 crc kubenswrapper[4854]: W0103 05:40:41.308548 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-0221552e0b13eb0d77c92e04530933f34f512d774e5725cef78d5a46a9219859 WatchSource:0}: Error finding container 0221552e0b13eb0d77c92e04530933f34f512d774e5725cef78d5a46a9219859: Status 404 returned error can't find the container with id 0221552e0b13eb0d77c92e04530933f34f512d774e5725cef78d5a46a9219859 Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.312176 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 05:40:41 crc kubenswrapper[4854]: W0103 05:40:41.325813 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4bd9026145061469693038a0d8ea5d04d47fb4a048a7e2d1ccc73afbd1338e8a WatchSource:0}: Error finding container 4bd9026145061469693038a0d8ea5d04d47fb4a048a7e2d1ccc73afbd1338e8a: Status 404 returned error can't find the container with id 4bd9026145061469693038a0d8ea5d04d47fb4a048a7e2d1ccc73afbd1338e8a Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.409024 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0221552e0b13eb0d77c92e04530933f34f512d774e5725cef78d5a46a9219859"} Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.410814 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d1d2bd962ba76847ef17564eae1cb155d4cb84bb3080e6138625e855d2a71100"} Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.413724 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.416009 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51"} Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.417326 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.419957 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4bd9026145061469693038a0d8ea5d04d47fb4a048a7e2d1ccc73afbd1338e8a"} Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.440161 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.482693 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.503741 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef192bc6-bffe-4f90-84b6-918559c5f545\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc74577e913605955b4ddbf4208de574ab14f6c77fff4c53687e70371d857ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96231fb89824a6f67751829855710db81ef40260a52c5366183c339af4856af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15993a4d28dfd9bd6fd9d3ecc22d1f921e48f19e6359ea378b0ba5ca47b283d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.520447 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.533767 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.546388 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24758257-4839-46a6-836c-76b2208dda54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 05:40:40.014595 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 05:40:40.014962 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 05:40:40.017243 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1571202385/tls.crt::/tmp/serving-cert-1571202385/tls.key\\\\\\\"\\\\nI0103 05:40:40.260303 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 05:40:40.263289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 05:40:40.263307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 05:40:40.263328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 05:40:40.263337 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 05:40:40.268449 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0103 05:40:40.268470 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0103 05:40:40.268495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268502 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 05:40:40.268513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 05:40:40.268519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 05:40:40.268523 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0103 05:40:40.270651 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.557743 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.571813 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.673423 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.673508 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.673545 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673585 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:40:42.673555385 +0000 UTC m=+21.000131957 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673619 4854 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.673624 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673675 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:42.673660648 +0000 UTC m=+21.000237230 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.673696 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673744 4854 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673758 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673792 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673795 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673805 4854 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673813 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673828 4854 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673776 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:42.673767501 +0000 UTC m=+21.000344063 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673865 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:42.673857593 +0000 UTC m=+21.000434165 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:41 crc kubenswrapper[4854]: E0103 05:40:41.673915 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:42.673899784 +0000 UTC m=+21.000476486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.894886 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.911324 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.914044 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.919138 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24758257-4839-46a6-836c-76b2208dda54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 05:40:40.014595 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 05:40:40.014962 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 05:40:40.017243 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1571202385/tls.crt::/tmp/serving-cert-1571202385/tls.key\\\\\\\"\\\\nI0103 05:40:40.260303 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 05:40:40.263289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 05:40:40.263307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 05:40:40.263328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 05:40:40.263337 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 05:40:40.268449 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0103 05:40:40.268470 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0103 05:40:40.268495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268502 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 05:40:40.268513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 05:40:40.268519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 05:40:40.268523 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0103 05:40:40.270651 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:41Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.939510 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:41Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.951511 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:41Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.967566 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:41Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:41 crc kubenswrapper[4854]: I0103 05:40:41.985417 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:41Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.004519 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef192bc6-bffe-4f90-84b6-918559c5f545\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc74577e913605955b4ddbf4208de574ab14f6c77fff4c53687e70371d857ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96231fb89824a6f67751829855710db81ef40260a52c5366183c339af4856af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15993a4d28dfd9bd6fd9d3ecc22d1f921e48f19e6359ea378b0ba5ca47b283d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.016738 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.030509 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.041414 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.075038 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e45f62-d7dc-4ca6-838a-cb31ce347131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1535f5d804da272f32468a4941f4f8fbc8d2a09d2b9f0ef0730a742f5ae083ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694381b576de418c9972a7b4db718785c902db8a17c37c173e4370943cf0ee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e2e5aac4f318eebc95869a2fb2301f56571e7b52f1996be2f2d80c0e9fe578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd30be1141fbd926ac80423210355f41447055faaccbdc196af8ad354ffe8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb56b54e21a5aff9e1341447526cd6a24d348475b32ef74e8df1ce2af6325ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.096149 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef192bc6-bffe-4f90-84b6-918559c5f545\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc74577e913605955b4ddbf4208de574ab14f6c77fff4c53687e70371d857ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96231fb89824a6f67751829855710db81ef40260a52c5366183c339af4856af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15993a4d28dfd9bd6fd9d3ecc22d1f921e48f19e6359ea378b0ba5ca47b283d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.111680 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.117198 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.117383 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.117222 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.117789 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.124948 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.126190 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.128751 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.130311 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.132504 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.132531 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.133901 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.135262 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.137340 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.138721 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.140712 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.141839 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.144191 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.144931 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.145718 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.146939 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.147692 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.148681 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.149247 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.149842 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.150712 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.154969 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.155692 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.157063 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.157737 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.159272 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.159910 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.161742 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.164036 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24758257-4839-46a6-836c-76b2208dda54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 05:40:40.014595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 05:40:40.014962 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 05:40:40.017243 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1571202385/tls.crt::/tmp/serving-cert-1571202385/tls.key\\\\\\\"\\\\nI0103 05:40:40.260303 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 05:40:40.263289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 05:40:40.263307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 05:40:40.263328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 05:40:40.263337 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 05:40:40.268449 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0103 05:40:40.268470 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0103 05:40:40.268495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268502 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 05:40:40.268513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 05:40:40.268519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 05:40:40.268523 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0103 05:40:40.270651 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.164669 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.165392 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.166769 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.167449 4854 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.168722 4854 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.168940 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.171329 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.172637 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.173282 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.175321 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.176343 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.177853 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.179073 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.180557 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.180797 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.181519 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.182842 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.184308 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.185377 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.186047 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.187463 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.188952 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.189984 4854 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.190862 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.192115 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.192798 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.194097 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.194905 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.195669 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.197845 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.222204 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e45f62-d7dc-4ca6-838a-cb31ce347131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1535f5d804da272f32468a4941f4f8fbc8d2a09d2b9f0ef0730a742f5ae083ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694381b576de418c9972a7b4db718785c902db8a17c37c173e4370943cf0ee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e2e5aac4f318eebc95869a2fb2301f56571e7b52f1996be2f2d80c0e9fe578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd30be1141fbd926ac80423210355f41447055faaccbdc196af8ad354ffe8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb56b54e21a5aff9e1341447526cd6a24d348475b32ef74e8df1ce2af6325ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.247781 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef192bc6-bffe-4f90-84b6-918559c5f545\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc74577e913605955b4ddbf4208de574ab14f6c77fff4c53687e70371d857ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96231fb89824a6f67751829855710db81ef40260a52c5366183c339af4856af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15993a4d28dfd9bd6fd9d3ecc22d1f921e48f19e6359ea378b0ba5ca47b283d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.271552 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.296661 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.308540 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.319908 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.333983 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.345637 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.359053 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24758257-4839-46a6-836c-76b2208dda54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 05:40:40.014595 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 05:40:40.014962 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 05:40:40.017243 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1571202385/tls.crt::/tmp/serving-cert-1571202385/tls.key\\\\\\\"\\\\nI0103 05:40:40.260303 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 05:40:40.263289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 05:40:40.263307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 05:40:40.263328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 05:40:40.263337 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 05:40:40.268449 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0103 05:40:40.268470 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0103 05:40:40.268495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268502 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 05:40:40.268513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 05:40:40.268519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 05:40:40.268523 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0103 05:40:40.270651 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.423967 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f41ceea8c91663421c66d8ce681ed09bbdf286376383d5a3ff19e606ec76bd06"} Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.427573 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"17f2eacd2ec85cb4998fb9f2b86f2f619e9708d40c28052bf7dd31ddd413eea7"} Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.427720 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"34e909744cea359e596e89317d29956b4eaf04aa68274ce94682dbaba76e666c"} Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.451916 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e45f62-d7dc-4ca6-838a-cb31ce347131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1535f5d804da272f32468a4941f4f8fbc8d2a09d2b9f0ef0730a742f5ae083ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694381b576de418c9972a7b4db718785c902db8a17c37c173e4370943cf0ee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e2e5aac4f318eebc95869a2fb2301f56571e7b52f1996be2f2d80c0e9fe578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd30be1141fbd926ac80423210355f41447055faaccbdc196af8ad354ffe8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb56b54e21a5aff9e1341447526cd6a24d348475b32ef74e8df1ce2af6325ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.471259 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef192bc6-bffe-4f90-84b6-918559c5f545\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc74577e913605955b4ddbf4208de574ab14f6c77fff4c53687e70371d857ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96231fb89824a6f67751829855710db81ef40260a52c5366183c339af4856af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15993a4d28dfd9bd6fd9d3ecc22d1f921e48f19e6359ea378b0ba5ca47b283d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.491713 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41ceea8c91663421c66d8ce681ed09bbdf286376383d5a3ff19e606ec76bd06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.503764 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.516375 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.527351 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.539391 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.552407 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.566263 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24758257-4839-46a6-836c-76b2208dda54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 05:40:40.014595 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 05:40:40.014962 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 05:40:40.017243 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1571202385/tls.crt::/tmp/serving-cert-1571202385/tls.key\\\\\\\"\\\\nI0103 05:40:40.260303 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 05:40:40.263289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 05:40:40.263307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 05:40:40.263328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 05:40:40.263337 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 05:40:40.268449 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0103 05:40:40.268470 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0103 05:40:40.268495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268502 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 05:40:40.268513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 05:40:40.268519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 05:40:40.268523 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0103 05:40:40.270651 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.578807 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.609054 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e45f62-d7dc-4ca6-838a-cb31ce347131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1535f5d804da272f32468a4941f4f8fbc8d2a09d2b9f0ef0730a742f5ae083ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a694381b576de418c9972a7b4db718785c902db8a17c37c173e4370943cf0ee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e2e5aac4f318eebc95869a2fb2301f56571e7b52f1996be2f2d80c0e9fe578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd30be1141fbd926ac80423210355f41447055f
aaccbdc196af8ad354ffe8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb56b54e21a5aff9e1341447526cd6a24d348475b32ef74e8df1ce2af6325ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da18a154409efb65de00eca85d7a31c22d143cd9b9b192ec16685f22feb3d453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf5ad101a47c5460f04cebbfda47eba4666a0d708f56ddae282491e80c21bf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6da39576fdbe1acf370ab64899df0c55397600619feaae338fa625dcaad4a88f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.625904 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef192bc6-bffe-4f90-84b6-918559c5f545\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc74577e913605955b4ddbf4208de574ab14f6c77fff4c53687e70371d857ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96231fb89824a6f67751829855710db81ef40260a52c5366183c339af4856af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15993a4d28dfd9bd6fd9d3ecc22d1f921e48f19e6359ea378b0ba5ca47b283d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.641069 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41ceea8c91663421c66d8ce681ed09bbdf286376383d5a3ff19e606ec76bd06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.660399 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.681529 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.681640 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.681689 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.681729 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.681775 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:40:44.681733565 +0000 UTC m=+23.008310167 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.681812 4854 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.681887 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:44.681862339 +0000 UTC m=+23.008439021 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.681812 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.681927 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.681945 4854 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.681985 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:44.681971842 +0000 UTC m=+23.008548414 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.681995 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.682048 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.682106 4854 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.682136 4854 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.682168 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:44.682159076 +0000 UTC m=+23.008735648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:42 crc kubenswrapper[4854]: E0103 05:40:42.682211 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:44.682177927 +0000 UTC m=+23.008754549 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.682166 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17f2eacd2ec85cb4998fb9f2b86f2f619e9708d40c28052bf7dd31ddd413eea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34e909744cea359e596e89317d29956b4eaf04aa68274ce94682dbaba76e666c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.681870 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.703568 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24758257-4839-46a6-836c-76b2208dda54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 05:40:40.014595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 05:40:40.014962 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 05:40:40.017243 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1571202385/tls.crt::/tmp/serving-cert-1571202385/tls.key\\\\\\\"\\\\nI0103 05:40:40.260303 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 05:40:40.263289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 05:40:40.263307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 05:40:40.263328 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 05:40:40.263337 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 05:40:40.268449 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0103 05:40:40.268470 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0103 05:40:40.268495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268502 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 05:40:40.268508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 05:40:40.268513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 05:40:40.268519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 05:40:40.268523 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0103 05:40:40.270651 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T05:40:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T05:40:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T05:40:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T05:40:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.717174 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:42 crc kubenswrapper[4854]: I0103 05:40:42.730646 4854 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:42Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.117543 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:43 crc kubenswrapper[4854]: E0103 05:40:43.117684 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.212820 4854 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.215495 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.215550 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.215568 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.215676 4854 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.227107 4854 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.227428 4854 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.228938 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.228983 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.228999 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.229020 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 
crc kubenswrapper[4854]: I0103 05:40:43.229038 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: E0103 05:40:43.263249 4854 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a66242-f853-4864-8639-b84ace4c39eb\\\",\\\"systemUUID\\\":\\\"70824611-2ad5-40b8-af0f-fb136ff2a322\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:43Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.268158 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.268207 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.268221 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.268240 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.268251 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: E0103 05:40:43.323710 4854 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a66242-f853-4864-8639-b84ace4c39eb\\\",\\\"systemUUID\\\":\\\"70824611-2ad5-40b8-af0f-fb136ff2a322\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:43Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.332065 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.332114 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.332125 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.332143 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.332156 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: E0103 05:40:43.359330 4854 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a66242-f853-4864-8639-b84ace4c39eb\\\",\\\"systemUUID\\\":\\\"70824611-2ad5-40b8-af0f-fb136ff2a322\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:43Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.363070 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.363133 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.363143 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.363160 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.363172 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: E0103 05:40:43.378301 4854 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a66242-f853-4864-8639-b84ace4c39eb\\\",\\\"systemUUID\\\":\\\"70824611-2ad5-40b8-af0f-fb136ff2a322\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:43Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.382803 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.382838 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.382852 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.382869 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.382881 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: E0103 05:40:43.397881 4854 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T05:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"50a66242-f853-4864-8639-b84ace4c39eb\\\",\\\"systemUUID\\\":\\\"70824611-2ad5-40b8-af0f-fb136ff2a322\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T05:40:43Z is after 2025-08-24T17:21:41Z" Jan 03 05:40:43 crc kubenswrapper[4854]: E0103 05:40:43.398528 4854 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.402050 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.402189 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.402239 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.402283 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.402314 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.505669 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.505717 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.505729 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.505747 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.505760 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.607998 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.608059 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.608102 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.608128 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.608146 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.710530 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.710580 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.710592 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.710632 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.710643 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.813250 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.813293 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.813301 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.813318 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.813328 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.915897 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.915956 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.915973 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.915998 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:43 crc kubenswrapper[4854]: I0103 05:40:43.916015 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:43Z","lastTransitionTime":"2026-01-03T05:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.019494 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.019562 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.019587 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.019624 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.019648 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.117805 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.118039 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.118169 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.118405 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.121784 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.121836 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.121854 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.121874 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.121889 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.225202 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.225264 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.225283 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.225313 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.225332 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.328816 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.328883 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.328902 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.328927 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.328945 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.432422 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.432502 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.432525 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.432555 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.432577 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.434200 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"33c6c8fdd15c243d05dac5f2b358cb5dfc806cbb0ca6d1e3339fca68b1ef4df7"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.485399 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.4853774570000002 podStartE2EDuration="3.485377457s" podCreationTimestamp="2026-01-03 05:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:44.48471737 +0000 UTC m=+22.811294002" watchObservedRunningTime="2026-01-03 05:40:44.485377457 +0000 UTC m=+22.811954059" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.533435 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=4.533406321 podStartE2EDuration="4.533406321s" podCreationTimestamp="2026-01-03 05:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:44.513500015 +0000 UTC m=+22.840076587" watchObservedRunningTime="2026-01-03 05:40:44.533406321 +0000 UTC m=+22.859982933" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.535160 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.535232 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.535258 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.535287 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.535311 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.638155 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.638541 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.638744 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.639007 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.639415 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.664207 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=4.664177829 podStartE2EDuration="4.664177829s" podCreationTimestamp="2026-01-03 05:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:44.663457751 +0000 UTC m=+22.990034373" watchObservedRunningTime="2026-01-03 05:40:44.664177829 +0000 UTC m=+22.990754471" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.703168 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.703351 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.703410 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.703463 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.703505 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703686 4854 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703717 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703754 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703772 4854 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703781 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:40:48.703744224 +0000 UTC m=+27.030320836 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703817 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703849 4854 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703857 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703897 4854 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703827 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:48.703813506 +0000 UTC m=+27.030390108 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703959 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:48.703947 +0000 UTC m=+27.030523602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.703989 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:48.703978961 +0000 UTC m=+27.030555563 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:44 crc kubenswrapper[4854]: E0103 05:40:44.704017 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:48.704006841 +0000 UTC m=+27.030583443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.742609 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.742647 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.742659 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.742676 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.742687 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.845530 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.845599 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.845619 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.845644 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.845665 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.948042 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.948105 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.948117 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.948135 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:44 crc kubenswrapper[4854]: I0103 05:40:44.948147 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:44Z","lastTransitionTime":"2026-01-03T05:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.050686 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.050760 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.050787 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.050819 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.050843 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.117328 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:45 crc kubenswrapper[4854]: E0103 05:40:45.117496 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.152949 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.153034 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.153053 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.153115 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.153134 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.255295 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.255331 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.255340 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.255352 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.255361 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.357389 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.357424 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.357432 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.357447 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.357457 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.444958 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-627v7"] Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.445316 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-627v7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.447302 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.447755 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.450240 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.457802 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6rlbv"] Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.458146 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.460569 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.460597 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.460608 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.460622 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.460634 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.461175 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.461301 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.461414 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.461672 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.505890 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-spn2r"] Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.506209 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.508312 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.508646 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.508984 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.509567 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.510382 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qxnj\" (UniqueName: \"kubernetes.io/projected/ff010d23-c09f-4d08-9f41-551442410288-kube-api-access-7qxnj\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.510459 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff010d23-c09f-4d08-9f41-551442410288-host\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.510503 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff010d23-c09f-4d08-9f41-551442410288-serviceca\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.510604 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/550b65cf-a393-46c8-a8d6-f521689da32a-hosts-file\") pod \"node-resolver-627v7\" (UID: \"550b65cf-a393-46c8-a8d6-f521689da32a\") " pod="openshift-dns/node-resolver-627v7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.510791 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnpgj\" (UniqueName: \"kubernetes.io/projected/550b65cf-a393-46c8-a8d6-f521689da32a-kube-api-access-dnpgj\") pod \"node-resolver-627v7\" (UID: \"550b65cf-a393-46c8-a8d6-f521689da32a\") " pod="openshift-dns/node-resolver-627v7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.515050 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.537711 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xcb5t"] Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.539075 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.541489 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.543267 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.562903 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.562934 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.562944 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.562959 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.562968 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.601893 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6wgwf"] Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.602249 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:45 crc kubenswrapper[4854]: E0103 05:40:45.602306 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6wgwf" podUID="c9a35ce4-6254-4744-b9a8-966399ae89cc" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611427 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/550b65cf-a393-46c8-a8d6-f521689da32a-hosts-file\") pod \"node-resolver-627v7\" (UID: \"550b65cf-a393-46c8-a8d6-f521689da32a\") " pod="openshift-dns/node-resolver-627v7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611478 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-system-cni-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611521 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-socket-dir-parent\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611548 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-conf-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611568 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-os-release\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611590 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96aa5b32-eb66-4453-9774-be58ee22cfce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611611 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-cni-binary-copy\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611632 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96aa5b32-eb66-4453-9774-be58ee22cfce-cni-binary-copy\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611701 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnpgj\" (UniqueName: 
\"kubernetes.io/projected/550b65cf-a393-46c8-a8d6-f521689da32a-kube-api-access-dnpgj\") pod \"node-resolver-627v7\" (UID: \"550b65cf-a393-46c8-a8d6-f521689da32a\") " pod="openshift-dns/node-resolver-627v7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611746 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611778 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-hostroot\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611799 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2ql\" (UniqueName: \"kubernetes.io/projected/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-kube-api-access-hr2ql\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611837 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-system-cni-dir\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611874 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-etc-kubernetes\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611900 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-netns\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611922 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-cni-bin\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611948 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-multus-certs\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611972 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8kc78\" (UniqueName: \"kubernetes.io/projected/96aa5b32-eb66-4453-9774-be58ee22cfce-kube-api-access-8kc78\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.611998 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-cni-multus\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612036 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/550b65cf-a393-46c8-a8d6-f521689da32a-hosts-file\") pod \"node-resolver-627v7\" (UID: \"550b65cf-a393-46c8-a8d6-f521689da32a\") " pod="openshift-dns/node-resolver-627v7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612052 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-cnibin\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612104 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-cni-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612139 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-daemon-config\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612166 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-kubelet\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612214 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qxnj\" (UniqueName: \"kubernetes.io/projected/ff010d23-c09f-4d08-9f41-551442410288-kube-api-access-7qxnj\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612237 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-cnibin\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612259 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-os-release\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612282 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-k8s-cni-cncf-io\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612322 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff010d23-c09f-4d08-9f41-551442410288-host\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612370 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff010d23-c09f-4d08-9f41-551442410288-serviceca\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.612423 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff010d23-c09f-4d08-9f41-551442410288-host\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.613324 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ff010d23-c09f-4d08-9f41-551442410288-serviceca\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.631405 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnpgj\" (UniqueName: \"kubernetes.io/projected/550b65cf-a393-46c8-a8d6-f521689da32a-kube-api-access-dnpgj\") pod \"node-resolver-627v7\" (UID: \"550b65cf-a393-46c8-a8d6-f521689da32a\") " pod="openshift-dns/node-resolver-627v7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.639591 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qxnj\" (UniqueName: \"kubernetes.io/projected/ff010d23-c09f-4d08-9f41-551442410288-kube-api-access-7qxnj\") pod \"node-ca-6rlbv\" (UID: \"ff010d23-c09f-4d08-9f41-551442410288\") " pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.649442 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qdhfx"] Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.649858 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.653224 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.653398 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.654710 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.657022 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.657168 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.660206 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zffbr"] Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.660923 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.664955 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.664998 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.665008 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.665026 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.665043 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.665823 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.666484 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.666542 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.666565 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.667038 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.667253 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.667380 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713000 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-cni-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713037 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-daemon-config\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713053 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-cnibin\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713072 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-slash\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713104 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-netns\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713133 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-kubelet\") pod \"multus-spn2r\" (UID: 
\"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713150 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-rootfs\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713165 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-ovn\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713182 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-proxy-tls\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713200 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwnm\" (UniqueName: \"kubernetes.io/projected/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-kube-api-access-fjwnm\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713215 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-log-socket\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713231 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-cnibin\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713256 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-os-release\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713272 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-k8s-cni-cncf-io\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713287 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-script-lib\") pod \"ovnkube-node-zffbr\" (UID: 
\"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713305 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-var-lib-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713322 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-ovn-kubernetes\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713343 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-config\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713361 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713379 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713397 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-netd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713420 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-system-cni-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713434 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-kubelet\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713452 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-systemd-units\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713469 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-socket-dir-parent\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713486 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-conf-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713499 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-os-release\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713514 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-cni-binary-copy\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713528 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96aa5b32-eb66-4453-9774-be58ee22cfce-cni-binary-copy\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713545 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96aa5b32-eb66-4453-9774-be58ee22cfce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713559 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-node-log\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713575 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-systemd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713589 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-hostroot\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713604 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2ql\" (UniqueName: \"kubernetes.io/projected/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-kube-api-access-hr2ql\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713618 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-system-cni-dir\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713634 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713652 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713667 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovn-node-metrics-cert\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713681 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppv72\" (UniqueName: \"kubernetes.io/projected/c9a35ce4-6254-4744-b9a8-966399ae89cc-kube-api-access-ppv72\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713697 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-etc-kubernetes\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713714 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-netns\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713743 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-cni-bin\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713758 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-bin\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713772 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-env-overrides\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713792 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-cni-multus\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713807 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-multus-certs\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713822 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kc78\" (UniqueName: \"kubernetes.io/projected/96aa5b32-eb66-4453-9774-be58ee22cfce-kube-api-access-8kc78\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713837 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8qgd\" (UniqueName: \"kubernetes.io/projected/dea8fd3f-411f-44a8-a1d6-4881f41fc149-kube-api-access-z8qgd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713858 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-mcd-auth-proxy-config\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.713876 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-etc-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714050 4854 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-cni-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714600 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-daemon-config\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714633 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-cnibin\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714661 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-kubelet\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714714 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-cnibin\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714750 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-os-release\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714771 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-k8s-cni-cncf-io\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714834 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-system-cni-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714873 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-socket-dir-parent\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714899 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-multus-conf-dir\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " 
pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.714932 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-os-release\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.715301 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-cni-binary-copy\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.715716 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/96aa5b32-eb66-4453-9774-be58ee22cfce-cni-binary-copy\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.716259 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/96aa5b32-eb66-4453-9774-be58ee22cfce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.724255 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-etc-kubernetes\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.724281 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-system-cni-dir\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.724310 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-cni-bin\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.724350 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-hostroot\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.724342 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-var-lib-cni-multus\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.724363 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-netns\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.725102 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-host-run-multus-certs\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.728541 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/96aa5b32-eb66-4453-9774-be58ee22cfce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.755616 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kc78\" (UniqueName: \"kubernetes.io/projected/96aa5b32-eb66-4453-9774-be58ee22cfce-kube-api-access-8kc78\") pod \"multus-additional-cni-plugins-xcb5t\" (UID: \"96aa5b32-eb66-4453-9774-be58ee22cfce\") " pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.755640 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2ql\" (UniqueName: \"kubernetes.io/projected/9bfe5118-0560-4d0c-9f5a-8a77143dd58e-kube-api-access-hr2ql\") pod \"multus-spn2r\" (UID: \"9bfe5118-0560-4d0c-9f5a-8a77143dd58e\") " pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.756290 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-627v7" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.766951 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.766980 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.766989 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.767003 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.767013 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.767423 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6rlbv" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815324 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-slash\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815606 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-slash\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815729 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-netns\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815826 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-netns\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815892 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-rootfs\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815918 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-ovn\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815947 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-proxy-tls\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815971 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjwnm\" (UniqueName: \"kubernetes.io/projected/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-kube-api-access-fjwnm\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.815987 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-log-socket\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 
crc kubenswrapper[4854]: I0103 05:40:45.816008 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-script-lib\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816028 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-var-lib-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816045 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-ovn-kubernetes\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816063 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-config\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816116 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816137 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816161 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-netd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816189 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-kubelet\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816206 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-systemd-units\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816242 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-node-log\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816267 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-systemd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816294 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816323 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovn-node-metrics-cert\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816347 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppv72\" (UniqueName: \"kubernetes.io/projected/c9a35ce4-6254-4744-b9a8-966399ae89cc-kube-api-access-ppv72\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816370 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-bin\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816384 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-env-overrides\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816405 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8qgd\" (UniqueName: \"kubernetes.io/projected/dea8fd3f-411f-44a8-a1d6-4881f41fc149-kube-api-access-z8qgd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816441 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-mcd-auth-proxy-config\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 
05:40:45.816462 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-etc-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816517 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-etc-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816560 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-bin\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816606 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-systemd-units\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816648 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-netd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816640 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816692 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-node-log\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816719 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-log-socket\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816671 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-kubelet\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816749 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-rootfs\") 
pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.816777 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-ovn\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.817142 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-ovn-kubernetes\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.817198 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-systemd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.817256 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-var-lib-openvswitch\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.817162 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-env-overrides\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: E0103 05:40:45.817283 4854 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:45 crc kubenswrapper[4854]: E0103 05:40:45.817327 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs podName:c9a35ce4-6254-4744-b9a8-966399ae89cc nodeName:}" failed. No retries permitted until 2026-01-03 05:40:46.317315516 +0000 UTC m=+24.643892088 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs") pod "network-metrics-daemon-6wgwf" (UID: "c9a35ce4-6254-4744-b9a8-966399ae89cc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.817619 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-mcd-auth-proxy-config\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.817696 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-script-lib\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.817734 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-spn2r" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.817866 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-config\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.818038 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.822624 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovn-node-metrics-cert\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.823636 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-proxy-tls\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.837539 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjwnm\" (UniqueName: \"kubernetes.io/projected/e8c88d7d-092b-44f7-b4c8-3540be3c0e8b-kube-api-access-fjwnm\") pod \"machine-config-daemon-qdhfx\" (UID: \"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b\") " pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: W0103 05:40:45.839129 4854 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bfe5118_0560_4d0c_9f5a_8a77143dd58e.slice/crio-945d25633db2a67aa8bc1e2893673be45ee05400695dd41c680aaf448eafea17 WatchSource:0}: Error finding container 945d25633db2a67aa8bc1e2893673be45ee05400695dd41c680aaf448eafea17: Status 404 returned error can't find the container with id 945d25633db2a67aa8bc1e2893673be45ee05400695dd41c680aaf448eafea17 Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.843914 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8qgd\" (UniqueName: \"kubernetes.io/projected/dea8fd3f-411f-44a8-a1d6-4881f41fc149-kube-api-access-z8qgd\") pod \"ovnkube-node-zffbr\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") " pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.850737 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.852806 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppv72\" (UniqueName: \"kubernetes.io/projected/c9a35ce4-6254-4744-b9a8-966399ae89cc-kube-api-access-ppv72\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.869954 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.870015 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.870028 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.870047 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.870059 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.963215 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.972738 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.972767 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.972776 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.972804 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.972816 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:45Z","lastTransitionTime":"2026-01-03T05:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:45 crc kubenswrapper[4854]: I0103 05:40:45.974057 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:45 crc kubenswrapper[4854]: W0103 05:40:45.981550 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c88d7d_092b_44f7_b4c8_3540be3c0e8b.slice/crio-4fb0f951eb3cbff5c7dd81205036e55da3006c7f4e1c1e76055a1f398608d20f WatchSource:0}: Error finding container 4fb0f951eb3cbff5c7dd81205036e55da3006c7f4e1c1e76055a1f398608d20f: Status 404 returned error can't find the container with id 4fb0f951eb3cbff5c7dd81205036e55da3006c7f4e1c1e76055a1f398608d20f Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.059662 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8"] Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.060104 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.062610 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.062641 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.077641 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.077673 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.077704 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.077723 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.077735 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.117506 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.117583 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:46 crc kubenswrapper[4854]: E0103 05:40:46.117650 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:46 crc kubenswrapper[4854]: E0103 05:40:46.117763 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.118788 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9b216aa-2746-458b-8442-6f9327c13886-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.118833 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lkb\" (UniqueName: \"kubernetes.io/projected/b9b216aa-2746-458b-8442-6f9327c13886-kube-api-access-z7lkb\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.118876 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9b216aa-2746-458b-8442-6f9327c13886-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.118909 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9b216aa-2746-458b-8442-6f9327c13886-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: W0103 05:40:46.159199 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddea8fd3f_411f_44a8_a1d6_4881f41fc149.slice/crio-565fc801a2168358e61255fc30de012d150001e705004cf9bbfa025053a1507b WatchSource:0}: Error finding container 565fc801a2168358e61255fc30de012d150001e705004cf9bbfa025053a1507b: Status 404 returned error can't find the container with id 565fc801a2168358e61255fc30de012d150001e705004cf9bbfa025053a1507b Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.180402 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.180442 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.180454 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.180473 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.180485 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.220128 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9b216aa-2746-458b-8442-6f9327c13886-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.220173 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9b216aa-2746-458b-8442-6f9327c13886-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.220223 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9b216aa-2746-458b-8442-6f9327c13886-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.220270 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lkb\" (UniqueName: \"kubernetes.io/projected/b9b216aa-2746-458b-8442-6f9327c13886-kube-api-access-z7lkb\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.220937 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9b216aa-2746-458b-8442-6f9327c13886-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.220964 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9b216aa-2746-458b-8442-6f9327c13886-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.225013 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9b216aa-2746-458b-8442-6f9327c13886-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.246620 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lkb\" (UniqueName: \"kubernetes.io/projected/b9b216aa-2746-458b-8442-6f9327c13886-kube-api-access-z7lkb\") pod \"ovnkube-control-plane-749d76644c-t8nt8\" (UID: \"b9b216aa-2746-458b-8442-6f9327c13886\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 
03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.284067 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.284118 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.284129 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.284155 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.284167 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.321364 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:46 crc kubenswrapper[4854]: E0103 05:40:46.321521 4854 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:46 crc kubenswrapper[4854]: E0103 05:40:46.321568 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs podName:c9a35ce4-6254-4744-b9a8-966399ae89cc nodeName:}" failed. No retries permitted until 2026-01-03 05:40:47.32155251 +0000 UTC m=+25.648129082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs") pod "network-metrics-daemon-6wgwf" (UID: "c9a35ce4-6254-4744-b9a8-966399ae89cc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.378312 4854 util.go:30] "No sandbox for pod can be found. 
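
The metrics-certs failure just above shows the kubelet's per-volume retry backoff at work: "durationBeforeRetry 1s" here, 2s on the next failure, and 8s by the time the networking-console-plugin-cert volume fails later in this section. Below is a minimal sketch of that doubling schedule; the initial delay and cap are assumptions chosen to match the values visible in this log, not constants read from kubelet source.

# Sketch of the doubling retry pattern behind the "durationBeforeRetry"
# fields above (1s, 2s, ... 8s). Initial delay and cap are illustrative
# assumptions, not kubelet's actual configured constants.
from datetime import timedelta

def backoff_schedule(initial=timedelta(seconds=1),
                     cap=timedelta(minutes=2, seconds=2),
                     attempts=8):
    """Yield the wait before each successive retry, doubling up to a cap."""
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print([str(d) for d in backoff_schedule()])
# ['0:00:01', '0:00:02', '0:00:04', '0:00:08', ...]

The cap matters in practice: once a volume has failed repeatedly, the next attempt can be minutes away even after the underlying object finally syncs.
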
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.392386 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.392424 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.392435 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.392472 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.392482 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:46 crc kubenswrapper[4854]: W0103 05:40:46.422883 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b216aa_2746_458b_8442_6f9327c13886.slice/crio-f2796209b091e1f78c47b312f6c5ad8884c335fce18fcd53361ab4a4d6ce2f87 WatchSource:0}: Error finding container f2796209b091e1f78c47b312f6c5ad8884c335fce18fcd53361ab4a4d6ce2f87: Status 404 returned error can't find the container with id f2796209b091e1f78c47b312f6c5ad8884c335fce18fcd53361ab4a4d6ce2f87 Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.440155 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413" exitCode=0 Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.440218 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.440244 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"565fc801a2168358e61255fc30de012d150001e705004cf9bbfa025053a1507b"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.441733 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerStarted","Data":"6139afa6d806cfaf87c0da4fa83af2d8bdb98748b1426faf64d4c996af155626"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.441755 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerStarted","Data":"553c607496f1e41570868e9e02e70aa0cf333ebe96dc8f8c8daf713f5c745018"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.443240 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-spn2r" 
event={"ID":"9bfe5118-0560-4d0c-9f5a-8a77143dd58e","Type":"ContainerStarted","Data":"9039309fdb9b29d081ebbe9b1145ccab345e3ac234f4bbc0b9267d69a4ee8f81"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.443272 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-spn2r" event={"ID":"9bfe5118-0560-4d0c-9f5a-8a77143dd58e","Type":"ContainerStarted","Data":"945d25633db2a67aa8bc1e2893673be45ee05400695dd41c680aaf448eafea17"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.444945 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"41ee4426739e125fc38ef9de0bc907f228c08816a774c8b5f992bf1e1c0c09cc"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.444999 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"4fb0f951eb3cbff5c7dd81205036e55da3006c7f4e1c1e76055a1f398608d20f"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.446192 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" event={"ID":"b9b216aa-2746-458b-8442-6f9327c13886","Type":"ContainerStarted","Data":"f2796209b091e1f78c47b312f6c5ad8884c335fce18fcd53361ab4a4d6ce2f87"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.454508 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6rlbv" event={"ID":"ff010d23-c09f-4d08-9f41-551442410288","Type":"ContainerStarted","Data":"f6d947620d5c071b7114b4071c499a1b3b56ed646feb05d5345dd1d78d4e49bf"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.454562 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6rlbv" event={"ID":"ff010d23-c09f-4d08-9f41-551442410288","Type":"ContainerStarted","Data":"346f75046962a52dedd80cb79ba774ffcabeeaaa859bc6e4630c2e541368a5b3"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.458524 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-627v7" event={"ID":"550b65cf-a393-46c8-a8d6-f521689da32a","Type":"ContainerStarted","Data":"ee0f491277683409db2057fc83ed356df114f6c70b47bb5481dc8eef4a80310a"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.458559 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-627v7" event={"ID":"550b65cf-a393-46c8-a8d6-f521689da32a","Type":"ContainerStarted","Data":"77ab0d8f99d9ebac0be89d7fea604ffc3bb325380e36423af256d40746aa19d2"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.494675 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.494712 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.494721 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.494736 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.494747 4854 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.507729 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6rlbv" podStartSLOduration=1.5077082229999998 podStartE2EDuration="1.507708223s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:46.506490631 +0000 UTC m=+24.833067213" watchObservedRunningTime="2026-01-03 05:40:46.507708223 +0000 UTC m=+24.834284805" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.522006 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-627v7" podStartSLOduration=1.5219866629999999 podStartE2EDuration="1.521986663s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:46.521561282 +0000 UTC m=+24.848137854" watchObservedRunningTime="2026-01-03 05:40:46.521986663 +0000 UTC m=+24.848563255" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.538696 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-spn2r" podStartSLOduration=1.538673615 podStartE2EDuration="1.538673615s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:46.538140811 +0000 UTC m=+24.864717413" watchObservedRunningTime="2026-01-03 05:40:46.538673615 +0000 UTC m=+24.865250197" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.597674 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.597709 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.597720 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.597738 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.597752 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.701010 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.701245 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.701253 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.701266 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.701275 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.803556 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.803594 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.803604 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.803620 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.803632 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.906008 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.906395 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.906408 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.906430 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:46 crc kubenswrapper[4854]: I0103 05:40:46.906441 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:46Z","lastTransitionTime":"2026-01-03T05:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.009216 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.009280 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.009301 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.009330 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.009352 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.111623 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.111681 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.111703 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.111731 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.111749 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.117026 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.117048 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:47 crc kubenswrapper[4854]: E0103 05:40:47.117171 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:47 crc kubenswrapper[4854]: E0103 05:40:47.117267 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6wgwf" podUID="c9a35ce4-6254-4744-b9a8-966399ae89cc" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.213899 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.214987 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.215002 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.215016 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.215026 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.317172 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.317220 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.317232 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.317253 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.317267 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.330878 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:47 crc kubenswrapper[4854]: E0103 05:40:47.331058 4854 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:47 crc kubenswrapper[4854]: E0103 05:40:47.331173 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs podName:c9a35ce4-6254-4744-b9a8-966399ae89cc nodeName:}" failed. No retries permitted until 2026-01-03 05:40:49.331150188 +0000 UTC m=+27.657726850 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs") pod "network-metrics-daemon-6wgwf" (UID: "c9a35ce4-6254-4744-b9a8-966399ae89cc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.420305 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.420377 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.420399 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.420427 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.420442 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.465309 4854 generic.go:334] "Generic (PLEG): container finished" podID="96aa5b32-eb66-4453-9774-be58ee22cfce" containerID="6139afa6d806cfaf87c0da4fa83af2d8bdb98748b1426faf64d4c996af155626" exitCode=0 Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.465395 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerDied","Data":"6139afa6d806cfaf87c0da4fa83af2d8bdb98748b1426faf64d4c996af155626"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.467526 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"2277927daa3f918ffa654145fb76b8233e228bb8417001a5e0dc887925b265d6"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.469524 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" event={"ID":"b9b216aa-2746-458b-8442-6f9327c13886","Type":"ContainerStarted","Data":"810d0a26ba001ce1dfa030a15c7716e4a7a6a4d830b9bce66add81e7a1b325a0"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.471869 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.471899 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.505726 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podStartSLOduration=2.50569843 podStartE2EDuration="2.50569843s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:47.504405327 +0000 UTC m=+25.830981929" watchObservedRunningTime="2026-01-03 05:40:47.50569843 +0000 UTC m=+25.832275002" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.523230 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.523295 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.523312 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.523337 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.523353 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.625586 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.625639 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.625652 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.625672 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.625686 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.729135 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.729464 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.729475 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.729490 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.729500 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.834542 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.834586 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.834600 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.834619 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.834633 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.936932 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.936962 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.936971 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.936992 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:47 crc kubenswrapper[4854]: I0103 05:40:47.937002 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:47Z","lastTransitionTime":"2026-01-03T05:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.039535 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.039584 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.039595 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.039615 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.039628 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.117694 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.117724 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.117829 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.118032 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.141691 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.141771 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.141791 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.141833 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.141854 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.248531 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.248579 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.248597 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.248625 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.248644 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.352318 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.352377 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.352388 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.352413 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.352429 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.454968 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.455033 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.455052 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.455098 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.455118 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.477298 4854 generic.go:334] "Generic (PLEG): container finished" podID="96aa5b32-eb66-4453-9774-be58ee22cfce" containerID="c1973f731cdb27804e94868e6b61157d85c98e6ff1b2dea5074f6534ef2bc541" exitCode=0 Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.477370 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerDied","Data":"c1973f731cdb27804e94868e6b61157d85c98e6ff1b2dea5074f6534ef2bc541"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.479484 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" event={"ID":"b9b216aa-2746-458b-8442-6f9327c13886","Type":"ContainerStarted","Data":"ba10acd332db2d34cec8df46e8db5583162dd2c981ba4de219022081cc58555f"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.487287 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.487355 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.487377 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.487395 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.522169 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t8nt8" podStartSLOduration=3.522131935 podStartE2EDuration="3.522131935s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:48.521203511 +0000 UTC m=+26.847780083" watchObservedRunningTime="2026-01-03 05:40:48.522131935 +0000 UTC m=+26.848708537" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.557716 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.557799 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.557820 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.557850 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: 
I0103 05:40:48.557874 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.661233 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.661747 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.661758 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.661773 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.661783 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.747803 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.747941 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.747981 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.748006 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.748028 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748215 4854 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748297 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:56.748276124 +0000 UTC m=+35.074852696 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748388 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748413 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:40:56.748379357 +0000 UTC m=+35.074955929 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748426 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748455 4854 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748501 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:56.74849372 +0000 UTC m=+35.075070292 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
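
The nestedpendingoperations.go:348 entries above are the kubelet's volume manager backing off rather than retrying a failed MountVolume/UnmountVolume immediately: each failure pushes the next attempt further out (durationBeforeRetry is 8s for these volumes here, and 4s for a younger operation logged at 05:40:49 below), which is the signature of a doubling backoff. A minimal sketch of that pattern in Go; the initial delay and the cap are illustrative assumptions, not the kubelet's actual constants:

    package main

    import (
        "fmt"
        "time"
    )

    // expBackoff doubles the wait after every failure, up to a cap.
    // 500ms and 2m are assumed values for illustration only.
    type expBackoff struct {
        delay    time.Duration
        maxDelay time.Duration
    }

    func (b *expBackoff) next() time.Duration {
        d := b.delay
        b.delay *= 2
        if b.delay > b.maxDelay {
            b.delay = b.maxDelay
        }
        return d
    }

    func main() {
        b := expBackoff{delay: 500 * time.Millisecond, maxDelay: 2 * time.Minute}
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed; next retry in %v\n", attempt, b.next())
        }
    }

The "object ... not registered" failures themselves typically clear without intervention: they mean the kubelet has not yet synced the referenced Secret or ConfigMap into its local cache, so a later retry is expected to succeed once the objects are known.
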
Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748388 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748567 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748586 4854 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748387 4854 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748657 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:56.748636584 +0000 UTC m=+35.075213336 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:48 crc kubenswrapper[4854]: E0103 05:40:48.748745 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:56.748720846 +0000 UTC m=+35.075297608 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.763842 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.763892 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.763905 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.763931 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.763947 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.866944 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.866994 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.867004 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.867024 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.867036 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
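
The recurring "Node became not ready ... KubeletNotReady" condition above traces to a single gate: the container runtime reports NetworkReady=false because no CNI network config exists yet under /etc/kubernetes/cni/net.d/ (the ovnkube-node and multus containers starting elsewhere in this log are what eventually provide one). The authoritative check lives in the container runtime's CNI manager, not the kubelet; the following is only a hedged, standalone sketch of the idea, "is there any usable network config in the directory yet":

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether dir contains anything that looks
    // like a CNI network configuration (.conf, .conflist, or .json).
    func cniConfigPresent(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        if err != nil || !ok {
            // Mirrors the condition in the log: NetworkReady=false,
            // reason NetworkPluginNotReady.
            fmt.Println("container runtime network not ready: NetworkReady=false")
            return
        }
        fmt.Println("NetworkReady=true")
    }

Once a config file appears, the runtime flips NetworkReady and the kubelet's next status sync marks the node Ready again, which is why this condition floods the log only during the startup window.
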
Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.969411 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.969447 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.969457 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.969471 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:48 crc kubenswrapper[4854]: I0103 05:40:48.969481 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:48Z","lastTransitionTime":"2026-01-03T05:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.072329 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.072364 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.072374 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.072388 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.072398 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.117419 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.117454 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:49 crc kubenswrapper[4854]: E0103 05:40:49.117596 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:49 crc kubenswrapper[4854]: E0103 05:40:49.117745 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6wgwf" podUID="c9a35ce4-6254-4744-b9a8-966399ae89cc" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.175263 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.175326 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.175349 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.175380 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.175403 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.278543 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.278629 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.278655 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.278687 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.278713 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.354703 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:49 crc kubenswrapper[4854]: E0103 05:40:49.354914 4854 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:49 crc kubenswrapper[4854]: E0103 05:40:49.355001 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs podName:c9a35ce4-6254-4744-b9a8-966399ae89cc nodeName:}" failed. No retries permitted until 2026-01-03 05:40:53.354977273 +0000 UTC m=+31.681553875 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs") pod "network-metrics-daemon-6wgwf" (UID: "c9a35ce4-6254-4744-b9a8-966399ae89cc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.381396 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.381426 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.381434 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.381450 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.381460 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.484609 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.484670 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.484696 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.484720 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.484740 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.494352 4854 generic.go:334] "Generic (PLEG): container finished" podID="96aa5b32-eb66-4453-9774-be58ee22cfce" containerID="edfa3f48f3ef264ea3cd0a78e53bce956a1cad7b456e53e7ec4219d6ab43a7de" exitCode=0 Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.494406 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerDied","Data":"edfa3f48f3ef264ea3cd0a78e53bce956a1cad7b456e53e7ec4219d6ab43a7de"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.588043 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.588153 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.588173 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.588199 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.588221 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.691982 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.692054 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.692107 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.692139 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.692158 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.795064 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.795116 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.795126 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.795141 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.795152 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.898869 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.898912 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.898929 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.898965 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:49 crc kubenswrapper[4854]: I0103 05:40:49.898982 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:49Z","lastTransitionTime":"2026-01-03T05:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.002623 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.002758 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.002777 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.002800 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.002816 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.106578 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.106803 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.106933 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.107027 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.107132 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.132364 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:50 crc kubenswrapper[4854]: E0103 05:40:50.132512 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.134435 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
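
"No sandbox for pod can be found. Need to start a new one" immediately followed by "Error syncing pod, skipping ... network is not ready" shows the ordering constraint at work: sandboxes for regular (non-host-network) pods cannot be created until the network plugin is ready, so these pods are parked and re-queued. To watch the node-level condition clear from outside, a small client-go sketch; the kubeconfig path and the node name "crc" are assumptions matching this log:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // While CNI config is missing this prints Ready=False with
                // reason KubeletNotReady, as in the setters.go:603 entries.
                fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
            }
        }
    }

Host-network pods (ovnkube-node, multus, the kube-apiserver static pod) are exempt from this gate, which is why the log shows their containers starting, and the kube-apiserver readiness probe passing below, while the node is still NotReady.
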
Jan 03 05:40:50 crc kubenswrapper[4854]: E0103 05:40:50.134607 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.209869 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.209907 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.209917 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.209933 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.209942 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.217761 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.312984 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.313037 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.313059 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.313100 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.313119 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.415552 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.415609 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.415630 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.415655 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.415674 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.505048 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.508374 4854 generic.go:334] "Generic (PLEG): container finished" podID="96aa5b32-eb66-4453-9774-be58ee22cfce" containerID="89770bca7bbfdac5d04a508721b8091fdb73a667ffd36b46c733034c1c52c56f" exitCode=0 Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.508405 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerDied","Data":"89770bca7bbfdac5d04a508721b8091fdb73a667ffd36b46c733034c1c52c56f"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.522395 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.522448 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.522460 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.522483 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.522500 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.624637 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.624671 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.624681 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.624697 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.624707 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.727311 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.727371 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.727389 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.727418 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.727439 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.830591 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.830677 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.830703 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.830737 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.830770 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.933641 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.933697 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.933716 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.933742 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:50 crc kubenswrapper[4854]: I0103 05:40:50.933759 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:50Z","lastTransitionTime":"2026-01-03T05:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.038713 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.038824 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.038881 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.038911 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.038931 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.117443 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.117443 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:51 crc kubenswrapper[4854]: E0103 05:40:51.117706 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:51 crc kubenswrapper[4854]: E0103 05:40:51.117830 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6wgwf" podUID="c9a35ce4-6254-4744-b9a8-966399ae89cc" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.142309 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.142368 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.142386 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.142409 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.142429 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.244725 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.244766 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.244780 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.244801 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.244816 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.346974 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.347011 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.347019 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.347033 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.347042 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.449699 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.449773 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.449794 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.449822 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.449842 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.519638 4854 generic.go:334] "Generic (PLEG): container finished" podID="96aa5b32-eb66-4453-9774-be58ee22cfce" containerID="b08f8b509266801e9bde01957ace686a04678d832157d335e69b891a08950cff" exitCode=0 Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.519715 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerDied","Data":"b08f8b509266801e9bde01957ace686a04678d832157d335e69b891a08950cff"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.554950 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.555026 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.555051 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.555115 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.555147 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.659595 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.659654 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.659672 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.659697 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.659719 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.761585 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.761627 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.761636 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.761653 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.761665 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.864762 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.864804 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.864816 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.864835 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.864848 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.967650 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.968195 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.968211 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.968231 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:51 crc kubenswrapper[4854]: I0103 05:40:51.968246 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:51Z","lastTransitionTime":"2026-01-03T05:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.071850 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.071892 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.071902 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.071921 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.071932 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.117805 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.117834 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:52 crc kubenswrapper[4854]: E0103 05:40:52.124592 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:52 crc kubenswrapper[4854]: E0103 05:40:52.124653 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.174284 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.174316 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.174327 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.174342 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.174352 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.281723 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.281791 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.281810 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.281835 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.281861 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.385221 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.385311 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.385334 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.385361 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.385382 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.488519 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.488553 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.488564 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.488579 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.488587 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.530510 4854 generic.go:334] "Generic (PLEG): container finished" podID="96aa5b32-eb66-4453-9774-be58ee22cfce" containerID="7451b08db2870a8f1925def1a048915f312d293dd21bb90d1ccaceea82db01b0" exitCode=0 Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.530554 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerDied","Data":"7451b08db2870a8f1925def1a048915f312d293dd21bb90d1ccaceea82db01b0"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.590969 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.591003 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.591012 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.591026 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.591038 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.693336 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.693369 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.693377 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.693391 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.693423 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.796417 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.796451 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.796459 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.796472 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.796481 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.902332 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.902392 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.902409 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.902437 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:52 crc kubenswrapper[4854]: I0103 05:40:52.902458 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:52Z","lastTransitionTime":"2026-01-03T05:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.006145 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.006767 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.006784 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.006808 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.006823 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:53Z","lastTransitionTime":"2026-01-03T05:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.110189 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.110244 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.110257 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.110276 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.110289 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:53Z","lastTransitionTime":"2026-01-03T05:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.117505 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.117609 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:53 crc kubenswrapper[4854]: E0103 05:40:53.117673 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:53 crc kubenswrapper[4854]: E0103 05:40:53.117781 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6wgwf" podUID="c9a35ce4-6254-4744-b9a8-966399ae89cc" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.213361 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.213405 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.213421 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.213441 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.213455 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:53Z","lastTransitionTime":"2026-01-03T05:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.317396 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.317477 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.317501 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.317533 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.317556 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:53Z","lastTransitionTime":"2026-01-03T05:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.357502 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:53 crc kubenswrapper[4854]: E0103 05:40:53.357735 4854 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:53 crc kubenswrapper[4854]: E0103 05:40:53.358318 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs podName:c9a35ce4-6254-4744-b9a8-966399ae89cc nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.358287445 +0000 UTC m=+39.684864027 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs") pod "network-metrics-daemon-6wgwf" (UID: "c9a35ce4-6254-4744-b9a8-966399ae89cc") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.419796 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.419835 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.419848 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.419865 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.419878 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:53Z","lastTransitionTime":"2026-01-03T05:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.523676 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.523735 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.523756 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.523782 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.523800 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:53Z","lastTransitionTime":"2026-01-03T05:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.538533 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerStarted","Data":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.538886 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.543775 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" event={"ID":"96aa5b32-eb66-4453-9774-be58ee22cfce","Type":"ContainerStarted","Data":"7a28fca972d8076dce1e8c89bf62709bac889e4ea97d661551aa69a34b6c643d"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.562779 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.562818 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.562830 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.562851 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.562863 4854 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T05:40:53Z","lastTransitionTime":"2026-01-03T05:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.565651 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.574189 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podStartSLOduration=8.574160057 podStartE2EDuration="8.574160057s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:53.570940054 +0000 UTC m=+31.897516636" watchObservedRunningTime="2026-01-03 05:40:53.574160057 +0000 UTC m=+31.900736629" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.612877 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xcb5t" podStartSLOduration=8.61285488 podStartE2EDuration="8.61285488s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:53.599006591 +0000 UTC m=+31.925583213" watchObservedRunningTime="2026-01-03 05:40:53.61285488 +0000 UTC m=+31.939431462" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.613554 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8"] Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.614100 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.616765 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.617008 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.617162 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.617216 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.660715 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20e94476-3093-4579-a42d-c895b581f605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.660758 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20e94476-3093-4579-a42d-c895b581f605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.660858 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20e94476-3093-4579-a42d-c895b581f605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.660921 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20e94476-3093-4579-a42d-c895b581f605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.660960 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e94476-3093-4579-a42d-c895b581f605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.762091 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20e94476-3093-4579-a42d-c895b581f605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.762145 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20e94476-3093-4579-a42d-c895b581f605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.762173 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20e94476-3093-4579-a42d-c895b581f605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.762192 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20e94476-3093-4579-a42d-c895b581f605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.762209 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e94476-3093-4579-a42d-c895b581f605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.762434 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20e94476-3093-4579-a42d-c895b581f605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.762463 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20e94476-3093-4579-a42d-c895b581f605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.763524 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20e94476-3093-4579-a42d-c895b581f605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.776555 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e94476-3093-4579-a42d-c895b581f605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.779665 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20e94476-3093-4579-a42d-c895b581f605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v6gn8\" (UID: \"20e94476-3093-4579-a42d-c895b581f605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:53 crc kubenswrapper[4854]: I0103 05:40:53.930606 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.117718 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:54 crc kubenswrapper[4854]: E0103 05:40:54.117873 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.118331 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:54 crc kubenswrapper[4854]: E0103 05:40:54.118430 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.549301 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" event={"ID":"20e94476-3093-4579-a42d-c895b581f605","Type":"ContainerStarted","Data":"e194b0ab6814f99475fb0d31624b201528f792b53f9fe3be42cf74c546a8480a"} Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.549384 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" event={"ID":"20e94476-3093-4579-a42d-c895b581f605","Type":"ContainerStarted","Data":"899bc484bd0e96e66d3b0efdeca5a1c296cc81096797b20a77a72208a23a1591"} Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.549391 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.549871 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.568874 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v6gn8" podStartSLOduration=9.568853479 podStartE2EDuration="9.568853479s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:40:54.568771047 +0000 UTC m=+32.895347619" watchObservedRunningTime="2026-01-03 05:40:54.568853479 +0000 UTC m=+32.895430051" Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.582227 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.989024 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6wgwf"] Jan 03 05:40:54 crc kubenswrapper[4854]: I0103 05:40:54.989454 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:54 crc kubenswrapper[4854]: E0103 05:40:54.989564 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6wgwf" podUID="c9a35ce4-6254-4744-b9a8-966399ae89cc" Jan 03 05:40:55 crc kubenswrapper[4854]: I0103 05:40:55.117366 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:55 crc kubenswrapper[4854]: E0103 05:40:55.117502 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:55 crc kubenswrapper[4854]: I0103 05:40:55.152154 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:40:56 crc kubenswrapper[4854]: I0103 05:40:56.117422 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:56 crc kubenswrapper[4854]: I0103 05:40:56.117485 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.117615 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.117758 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 05:40:56 crc kubenswrapper[4854]: I0103 05:40:56.799189 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:56 crc kubenswrapper[4854]: I0103 05:40:56.799366 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799395 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:12.79936418 +0000 UTC m=+51.125940762 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:40:56 crc kubenswrapper[4854]: I0103 05:40:56.799450 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:56 crc kubenswrapper[4854]: I0103 05:40:56.799504 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799589 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799627 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:56 crc kubenswrapper[4854]: I0103 05:40:56.799627 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799653 4854 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799694 4854 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799746 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 05:41:12.799717059 +0000 UTC m=+51.126293671 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799809 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:41:12.79977501 +0000 UTC m=+51.126351622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799850 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799872 4854 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799886 4854 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799900 4854 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.799928 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 05:41:12.799917444 +0000 UTC m=+51.126494026 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 05:40:56 crc kubenswrapper[4854]: E0103 05:40:56.800118 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 05:41:12.800042327 +0000 UTC m=+51.126618939 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.117207 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.117322 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:57 crc kubenswrapper[4854]: E0103 05:40:57.117567 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 05:40:57 crc kubenswrapper[4854]: E0103 05:40:57.117735 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6wgwf" podUID="c9a35ce4-6254-4744-b9a8-966399ae89cc" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.238699 4854 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.238950 4854 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.283441 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.284112 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc kubenswrapper[4854]: W0103 05:40:57.292373 4854 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 03 05:40:57 crc kubenswrapper[4854]: W0103 05:40:57.292443 4854 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 03 05:40:57 crc kubenswrapper[4854]: W0103 05:40:57.292521 4854 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 03 05:40:57 crc kubenswrapper[4854]: E0103 05:40:57.292544 4854 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 03 05:40:57 crc kubenswrapper[4854]: E0103 05:40:57.292531 4854 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 03 05:40:57 crc kubenswrapper[4854]: E0103 05:40:57.292444 4854 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 03 05:40:57 crc kubenswrapper[4854]: W0103 05:40:57.292560 4854 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 03 05:40:57 crc kubenswrapper[4854]: E0103 05:40:57.292601 4854 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace 
\"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.296064 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pzlj8"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.297822 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: W0103 05:40:57.298615 4854 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.298075 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n"] Jan 03 05:40:57 crc kubenswrapper[4854]: E0103 05:40:57.298725 4854 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.300287 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.301949 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.302581 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.302873 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfmjq"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.303673 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.305392 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.305515 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.305603 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbtmc\" (UniqueName: \"kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.305656 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.307619 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zhzlw"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.308628 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.308938 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.309554 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.309567 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.309787 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qv6qz"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.309968 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.310583 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.311541 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.311964 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.312190 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.312314 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.312419 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.312589 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.312601 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wcbst"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.313765 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.313907 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.314248 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.314319 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.316511 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.406790 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-audit\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.407186 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjtk\" (UniqueName: \"kubernetes.io/projected/cfdb7138-2cd3-450f-9421-e213122022af-kube-api-access-jrjtk\") pod \"dns-operator-744455d44c-qv6qz\" (UID: \"cfdb7138-2cd3-450f-9421-e213122022af\") " pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.407316 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.407391 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6974d72-3008-4b24-ab6c-332aa56cfd3b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.407468 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6974d72-3008-4b24-ab6c-332aa56cfd3b-config\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.407562 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bbch\" (UniqueName: \"kubernetes.io/projected/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-kube-api-access-7bbch\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.407717 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2259a421-a8dd-45e8-baa8-15cf1d37782e-audit-dir\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.407798 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbtmc\" (UniqueName: \"kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc 
kubenswrapper[4854]: I0103 05:40:57.407911 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.407987 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-image-import-ca\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408060 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-oauth-config\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408184 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7acbec-3363-42f1-b14d-150409b8c40b-serving-cert\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408262 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-etcd-serving-ca\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408331 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6974d72-3008-4b24-ab6c-332aa56cfd3b-images\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408394 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mm6c\" (UniqueName: \"kubernetes.io/projected/55c2bf44-bf9f-4dd8-910d-f24744fa629d-kube-api-access-2mm6c\") pod \"cluster-samples-operator-665b6dd947-q9s7n\" (UID: \"55c2bf44-bf9f-4dd8-910d-f24744fa629d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408464 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408529 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-service-ca\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408597 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-oauth-serving-cert\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408664 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txm7s\" (UniqueName: \"kubernetes.io/projected/c6974d72-3008-4b24-ab6c-332aa56cfd3b-kube-api-access-txm7s\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408726 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-etcd-client\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408798 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-serving-cert\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408890 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-machine-approver-tls\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.408964 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409026 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9lf\" (UniqueName: \"kubernetes.io/projected/7d7acbec-3363-42f1-b14d-150409b8c40b-kube-api-access-8m9lf\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409112 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409200 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-encryption-config\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409273 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-auth-proxy-config\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409337 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-serving-cert\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409401 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-config\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409477 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2259a421-a8dd-45e8-baa8-15cf1d37782e-node-pullsecrets\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409554 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55c2bf44-bf9f-4dd8-910d-f24744fa629d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q9s7n\" (UID: \"55c2bf44-bf9f-4dd8-910d-f24744fa629d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409630 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-client-ca\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409701 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-config\") pod 
\"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409782 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-config\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409865 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-trusted-ca-bundle\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.409942 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-config\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.410012 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76tjr\" (UniqueName: \"kubernetes.io/projected/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-kube-api-access-76tjr\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.410139 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdng4\" (UniqueName: \"kubernetes.io/projected/2259a421-a8dd-45e8-baa8-15cf1d37782e-kube-api-access-bdng4\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.410228 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cfdb7138-2cd3-450f-9421-e213122022af-metrics-tls\") pod \"dns-operator-744455d44c-qv6qz\" (UID: \"cfdb7138-2cd3-450f-9421-e213122022af\") " pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511355 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-service-ca\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511438 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-oauth-serving-cert\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511491 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-serving-cert\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511561 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txm7s\" (UniqueName: \"kubernetes.io/projected/c6974d72-3008-4b24-ab6c-332aa56cfd3b-kube-api-access-txm7s\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511613 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-etcd-client\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511685 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-machine-approver-tls\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511750 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m9lf\" (UniqueName: \"kubernetes.io/projected/7d7acbec-3363-42f1-b14d-150409b8c40b-kube-api-access-8m9lf\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511795 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-encryption-config\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511852 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-config\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511895 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2259a421-a8dd-45e8-baa8-15cf1d37782e-node-pullsecrets\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.511978 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-auth-proxy-config\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512031 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-serving-cert\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512116 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55c2bf44-bf9f-4dd8-910d-f24744fa629d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q9s7n\" (UID: \"55c2bf44-bf9f-4dd8-910d-f24744fa629d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512163 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-client-ca\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512208 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-config\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512254 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-config\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512302 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76tjr\" (UniqueName: \"kubernetes.io/projected/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-kube-api-access-76tjr\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512343 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-trusted-ca-bundle\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512391 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-config\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512445 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdng4\" (UniqueName: 
\"kubernetes.io/projected/2259a421-a8dd-45e8-baa8-15cf1d37782e-kube-api-access-bdng4\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512524 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cfdb7138-2cd3-450f-9421-e213122022af-metrics-tls\") pod \"dns-operator-744455d44c-qv6qz\" (UID: \"cfdb7138-2cd3-450f-9421-e213122022af\") " pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512573 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjtk\" (UniqueName: \"kubernetes.io/projected/cfdb7138-2cd3-450f-9421-e213122022af-kube-api-access-jrjtk\") pod \"dns-operator-744455d44c-qv6qz\" (UID: \"cfdb7138-2cd3-450f-9421-e213122022af\") " pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512608 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-audit\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512643 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6974d72-3008-4b24-ab6c-332aa56cfd3b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512702 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6974d72-3008-4b24-ab6c-332aa56cfd3b-config\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512736 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2259a421-a8dd-45e8-baa8-15cf1d37782e-audit-dir\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512772 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bbch\" (UniqueName: \"kubernetes.io/projected/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-kube-api-access-7bbch\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512819 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.512853 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-image-import-ca\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.514599 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-audit\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.514986 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2259a421-a8dd-45e8-baa8-15cf1d37782e-audit-dir\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.515258 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2259a421-a8dd-45e8-baa8-15cf1d37782e-node-pullsecrets\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.517460 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-config\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.518981 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-serving-cert\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.520140 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-etcd-client\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.525150 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55c2bf44-bf9f-4dd8-910d-f24744fa629d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q9s7n\" (UID: \"55c2bf44-bf9f-4dd8-910d-f24744fa629d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.532863 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-image-import-ca\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.533010 4854 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-oauth-config\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.533154 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7acbec-3363-42f1-b14d-150409b8c40b-serving-cert\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.533193 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-etcd-serving-ca\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.533264 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.533323 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6974d72-3008-4b24-ab6c-332aa56cfd3b-images\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.533356 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mm6c\" (UniqueName: \"kubernetes.io/projected/55c2bf44-bf9f-4dd8-910d-f24744fa629d-kube-api-access-2mm6c\") pod \"cluster-samples-operator-665b6dd947-q9s7n\" (UID: \"55c2bf44-bf9f-4dd8-910d-f24744fa629d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.541657 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-etcd-serving-ca\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.924857 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.925041 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.926638 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.926936 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.927261 4854 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.928639 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h7drl"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.929796 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.933144 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2259a421-a8dd-45e8-baa8-15cf1d37782e-encryption-config\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.933601 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.934340 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.934897 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.935517 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.936614 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.937024 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.937854 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-serving-cert\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.938136 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.939700 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-client-ca\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.939914 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-config\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.940296 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.941064 
4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.942658 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-service-ca\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.945022 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.945546 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.952383 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.953835 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.956816 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-oauth-config\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.957625 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7acbec-3363-42f1-b14d-150409b8c40b-serving-cert\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.960120 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n82hj"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.961363 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.963340 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-config\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.973267 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dmlm5"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.973776 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.973886 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-dmlm5" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.976098 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vtpbv"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.976513 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.976853 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.977218 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.980987 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2lwzj"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.981807 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pzlj8"] Jan 03 05:40:57 crc kubenswrapper[4854]: I0103 05:40:57.982036 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.021044 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.021171 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.021807 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.022401 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.024907 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wc7xf"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.025807 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-config\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.026602 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6974d72-3008-4b24-ab6c-332aa56cfd3b-config\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.022585 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.027551 4854 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.030248 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.049937 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6974d72-3008-4b24-ab6c-332aa56cfd3b-images\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.050761 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.052458 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfmjq"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.054145 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.054423 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.054683 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.054906 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.058523 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.058677 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.058777 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.058862 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.058984 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.062169 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mm6c\" (UniqueName: \"kubernetes.io/projected/55c2bf44-bf9f-4dd8-910d-f24744fa629d-kube-api-access-2mm6c\") pod \"cluster-samples-operator-665b6dd947-q9s7n\" (UID: \"55c2bf44-bf9f-4dd8-910d-f24744fa629d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.062538 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.066595 4854 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-zvkh2"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.075212 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wcbst"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.075307 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qv6qz"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.075407 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.075857 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tdlx9"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.076335 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.076550 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.076872 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.077128 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072801 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a93788-c0a0-412d-aabd-1e93727e72f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084038 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-serving-cert\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084062 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2p9n\" (UniqueName: \"kubernetes.io/projected/e05124c8-4705-4d57-82ec-b1ae0658e98e-kube-api-access-c2p9n\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084093 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfw4c\" (UniqueName: \"kubernetes.io/projected/c31b366b-2182-4c59-8777-e552553ba8a8-kube-api-access-vfw4c\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084110 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084127 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084144 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-audit-policies\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084165 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084190 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3a93788-c0a0-412d-aabd-1e93727e72f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084205 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-etcd-client\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084261 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4cxm\" (UniqueName: \"kubernetes.io/projected/dcde1a7d-7025-45cb-92de-483da7a86296-kube-api-access-s4cxm\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084284 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084314 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084343 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084358 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084385 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxpm\" (UniqueName: \"kubernetes.io/projected/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-kube-api-access-7cxpm\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084413 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084430 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084446 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084486 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9ecb343a-f88c-49d3-a792-696f8b94eca3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084501 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084524 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-service-ca-bundle\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084537 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvvr\" (UniqueName: \"kubernetes.io/projected/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-kube-api-access-hgvvr\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084554 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8bm\" (UniqueName: \"kubernetes.io/projected/3266855e-377c-4d01-ab10-7779bb871699-kube-api-access-jk8bm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084575 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-config\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084597 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084622 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084644 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ddcr5\" (UniqueName: \"kubernetes.io/projected/c3a93788-c0a0-412d-aabd-1e93727e72f0-kube-api-access-ddcr5\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084674 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-encryption-config\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084703 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084718 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hsfj\" (UniqueName: \"kubernetes.io/projected/9ecb343a-f88c-49d3-a792-696f8b94eca3-kube-api-access-5hsfj\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084737 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3266855e-377c-4d01-ab10-7779bb871699-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084751 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e05124c8-4705-4d57-82ec-b1ae0658e98e-audit-dir\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084778 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcde1a7d-7025-45cb-92de-483da7a86296-config\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084793 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcde1a7d-7025-45cb-92de-483da7a86296-trusted-ca\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084814 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084828 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084857 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-serving-cert\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084872 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-audit-policies\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084887 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3266855e-377c-4d01-ab10-7779bb871699-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084901 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcde1a7d-7025-45cb-92de-483da7a86296-serving-cert\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084923 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7f2n\" (UniqueName: \"kubernetes.io/projected/d5805efa-800c-43df-ba80-7a7db226ebb3-kube-api-access-g7f2n\") pod \"downloads-7954f5f757-dmlm5\" (UID: \"d5805efa-800c-43df-ba80-7a7db226ebb3\") " pod="openshift-console/downloads-7954f5f757-dmlm5" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084949 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c31b366b-2182-4c59-8777-e552553ba8a8-audit-dir\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084963 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.084988 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ecb343a-f88c-49d3-a792-696f8b94eca3-serving-cert\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.076951 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.073423 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6974d72-3008-4b24-ab6c-332aa56cfd3b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.073128 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-oauth-serving-cert\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.073974 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdng4\" (UniqueName: \"kubernetes.io/projected/2259a421-a8dd-45e8-baa8-15cf1d37782e-kube-api-access-bdng4\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.077328 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.062730 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071521 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071568 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086405 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086431 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086529 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086568 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086611 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086687 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086757 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m9lf\" (UniqueName: \"kubernetes.io/projected/7d7acbec-3363-42f1-b14d-150409b8c40b-kube-api-access-8m9lf\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086785 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086798 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086871 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086951 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.086995 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.087102 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.087112 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 
05:40:58.087177 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.087254 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.087547 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.087562 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.088157 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.088319 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.088451 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.088629 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.088738 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071610 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071642 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071707 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071768 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071800 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071833 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071865 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.089178 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.071972 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072006 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072038 4854 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072298 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.089387 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072334 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072413 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072448 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072481 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072580 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072679 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072778 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072822 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.072860 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.073007 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.076997 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.077067 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.062584 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.077451 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.090262 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.090505 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" 
Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.090679 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.096665 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.097956 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.098137 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.098659 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wfmjq\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.098799 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjtk\" (UniqueName: \"kubernetes.io/projected/cfdb7138-2cd3-450f-9421-e213122022af-kube-api-access-jrjtk\") pod \"dns-operator-744455d44c-qv6qz\" (UID: \"cfdb7138-2cd3-450f-9421-e213122022af\") " pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.099348 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76tjr\" (UniqueName: \"kubernetes.io/projected/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-kube-api-access-76tjr\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.100063 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bbch\" (UniqueName: \"kubernetes.io/projected/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-kube-api-access-7bbch\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.100498 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2259a421-a8dd-45e8-baa8-15cf1d37782e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pzlj8\" (UID: \"2259a421-a8dd-45e8-baa8-15cf1d37782e\") " pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.101482 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-trusted-ca-bundle\") pod \"console-f9d7485db-zhzlw\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.101580 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.101592 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.101786 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.103575 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-auth-proxy-config\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.117964 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.118016 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.118736 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.119020 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.119742 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.121291 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.121840 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.122239 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cfdb7138-2cd3-450f-9421-e213122022af-metrics-tls\") pod \"dns-operator-744455d44c-qv6qz\" (UID: \"cfdb7138-2cd3-450f-9421-e213122022af\") " pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.122275 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.122394 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.122811 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79-machine-approver-tls\") pod \"machine-approver-56656f9798-24g4d\" (UID: \"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.123328 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.123413 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txm7s\" (UniqueName: \"kubernetes.io/projected/c6974d72-3008-4b24-ab6c-332aa56cfd3b-kube-api-access-txm7s\") pod \"machine-api-operator-5694c8668f-wcbst\" (UID: \"c6974d72-3008-4b24-ab6c-332aa56cfd3b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.123590 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.123606 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.124860 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.127314 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.128219 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jw8v"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.128713 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.129059 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.129216 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.129285 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.130331 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-59h6x"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.130899 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.131114 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.131422 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.131738 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.131871 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.132229 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.136307 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139048 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139354 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-phv75"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139234 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139887 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139262 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139323 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139430 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.140119 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139492 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139517 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139527 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139614 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 03 05:40:58 crc 
kubenswrapper[4854]: I0103 05:40:58.140322 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.139712 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.140481 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.140493 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.141202 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.141553 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.142219 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.142929 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d4txl"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.143685 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.144020 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.144472 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.144703 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.147063 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.147934 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.148845 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.150045 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qmphn"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.151778 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-84wq5"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.151972 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.152255 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.161854 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.162116 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.167054 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.170670 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qr6h4"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.173391 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qr6h4" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.173706 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.184842 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9d9fw"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.185715 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2q7dr"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186381 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2q7dr" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186399 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a93788-c0a0-412d-aabd-1e93727e72f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186422 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-serving-cert\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186444 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2p9n\" (UniqueName: \"kubernetes.io/projected/e05124c8-4705-4d57-82ec-b1ae0658e98e-kube-api-access-c2p9n\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186462 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186477 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfw4c\" (UniqueName: \"kubernetes.io/projected/c31b366b-2182-4c59-8777-e552553ba8a8-kube-api-access-vfw4c\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186492 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186506 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186523 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-audit-policies\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186553 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/154b4721-3c78-4469-946c-cdf5a68bd110-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186571 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3a93788-c0a0-412d-aabd-1e93727e72f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186585 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-etcd-client\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186607 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4cxm\" (UniqueName: \"kubernetes.io/projected/dcde1a7d-7025-45cb-92de-483da7a86296-kube-api-access-s4cxm\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186616 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186624 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186639 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186662 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186680 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186700 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cxpm\" (UniqueName: \"kubernetes.io/projected/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-kube-api-access-7cxpm\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186724 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186758 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186776 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186799 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9ecb343a-f88c-49d3-a792-696f8b94eca3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186819 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186835 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4db886-9214-4c75-931e-acea9a580541-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186852 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-config\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: 
\"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186868 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-service-ca-bundle\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186884 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvvr\" (UniqueName: \"kubernetes.io/projected/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-kube-api-access-hgvvr\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186900 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk8bm\" (UniqueName: \"kubernetes.io/projected/3266855e-377c-4d01-ab10-7779bb871699-kube-api-access-jk8bm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186925 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186947 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186962 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/154b4721-3c78-4469-946c-cdf5a68bd110-config\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.186982 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddcr5\" (UniqueName: \"kubernetes.io/projected/c3a93788-c0a0-412d-aabd-1e93727e72f0-kube-api-access-ddcr5\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187000 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kj8t\" (UniqueName: 
\"kubernetes.io/projected/8f4db886-9214-4c75-931e-acea9a580541-kube-api-access-9kj8t\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187022 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-encryption-config\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187040 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/154b4721-3c78-4469-946c-cdf5a68bd110-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187057 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187072 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hsfj\" (UniqueName: \"kubernetes.io/projected/9ecb343a-f88c-49d3-a792-696f8b94eca3-kube-api-access-5hsfj\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187109 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3266855e-377c-4d01-ab10-7779bb871699-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187128 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e05124c8-4705-4d57-82ec-b1ae0658e98e-audit-dir\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187157 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcde1a7d-7025-45cb-92de-483da7a86296-config\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187175 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/dcde1a7d-7025-45cb-92de-483da7a86296-trusted-ca\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187189 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187207 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187226 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-audit-policies\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187249 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4db886-9214-4c75-931e-acea9a580541-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187276 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-serving-cert\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187295 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3266855e-377c-4d01-ab10-7779bb871699-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187310 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcde1a7d-7025-45cb-92de-483da7a86296-serving-cert\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187310 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187328 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7f2n\" (UniqueName: \"kubernetes.io/projected/d5805efa-800c-43df-ba80-7a7db226ebb3-kube-api-access-g7f2n\") pod \"downloads-7954f5f757-dmlm5\" (UID: \"d5805efa-800c-43df-ba80-7a7db226ebb3\") " pod="openshift-console/downloads-7954f5f757-dmlm5" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187357 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c31b366b-2182-4c59-8777-e552553ba8a8-audit-dir\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187372 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187388 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ecb343a-f88c-49d3-a792-696f8b94eca3-serving-cert\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187480 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zhzlw"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.187678 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.188209 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a93788-c0a0-412d-aabd-1e93727e72f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.188980 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.190471 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.191906 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-service-ca-bundle\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.191900 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.192146 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.192204 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.192223 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.192343 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.192596 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-config\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.192787 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.192998 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.193861 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9ecb343a-f88c-49d3-a792-696f8b94eca3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.195835 4854 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-encryption-config\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.195903 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-audit-policies\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.197423 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-serving-cert\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.197622 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ecb343a-f88c-49d3-a792-696f8b94eca3-serving-cert\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.197844 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.198123 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e05124c8-4705-4d57-82ec-b1ae0658e98e-audit-dir\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.198765 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.199008 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.199521 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.199558 4854 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.199571 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.199729 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcde1a7d-7025-45cb-92de-483da7a86296-config\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.200716 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcde1a7d-7025-45cb-92de-483da7a86296-trusted-ca\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.202162 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3a93788-c0a0-412d-aabd-1e93727e72f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.202700 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e05124c8-4705-4d57-82ec-b1ae0658e98e-audit-policies\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.202812 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c31b366b-2182-4c59-8777-e552553ba8a8-audit-dir\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.206053 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dmlm5"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.206865 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcde1a7d-7025-45cb-92de-483da7a86296-serving-cert\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.207256 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.207511 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.207766 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.207836 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3266855e-377c-4d01-ab10-7779bb871699-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.209125 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e05124c8-4705-4d57-82ec-b1ae0658e98e-etcd-client\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.209371 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.209401 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.210943 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.211177 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.211804 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.212186 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3266855e-377c-4d01-ab10-7779bb871699-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.213247 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vtpbv"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.214442 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zvkh2"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.215509 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-serving-cert\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.217534 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.219409 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.221997 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2lwzj"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.223975 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.225262 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n82hj"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.225623 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.227105 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-59h6x"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.228008 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.229179 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.230475 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jw8v"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.231497 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.233220 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.233657 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.234606 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-phv75"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.240930 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qmphn"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.241898 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d4txl"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.242302 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.243629 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.245239 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.246603 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h7drl"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.247683 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wc7xf"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.248747 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qr6h4"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.250166 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.250470 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.252793 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.254816 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2q7dr"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.255177 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.257837 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.265977 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.266369 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.272363 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.279036 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.286787 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.289949 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4db886-9214-4c75-931e-acea9a580541-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.290006 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/154b4721-3c78-4469-946c-cdf5a68bd110-config\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.290031 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kj8t\" (UniqueName: \"kubernetes.io/projected/8f4db886-9214-4c75-931e-acea9a580541-kube-api-access-9kj8t\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.290054 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/154b4721-3c78-4469-946c-cdf5a68bd110-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.290100 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4db886-9214-4c75-931e-acea9a580541-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.290182 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/154b4721-3c78-4469-946c-cdf5a68bd110-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.291054 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8f4db886-9214-4c75-931e-acea9a580541-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.294861 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/154b4721-3c78-4469-946c-cdf5a68bd110-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.295184 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4db886-9214-4c75-931e-acea9a580541-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:58 crc kubenswrapper[4854]: W0103 05:40:58.308147 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab5b4f5_1bd3_4bbe_b749_4f5607aa9f79.slice/crio-6eb758eff28b7d97915c1ca2858eb594809aa71d4c595036b9b4e0980cf25114 WatchSource:0}: Error finding container 6eb758eff28b7d97915c1ca2858eb594809aa71d4c595036b9b4e0980cf25114: Status 404 returned error can't find the container with id 6eb758eff28b7d97915c1ca2858eb594809aa71d4c595036b9b4e0980cf25114 Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.308176 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.315863 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/154b4721-3c78-4469-946c-cdf5a68bd110-config\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.351260 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.370370 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.386950 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.406131 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: E0103 05:40:58.407823 4854 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 03 05:40:58 crc kubenswrapper[4854]: E0103 05:40:58.407908 4854 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert podName:e0316818-6edd-4e11-9a85-cdc385194515 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:58.907884934 +0000 UTC m=+37.234461506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert") pod "route-controller-manager-6576b87f9c-9tbks" (UID: "e0316818-6edd-4e11-9a85-cdc385194515") : failed to sync secret cache: timed out waiting for the condition Jan 03 05:40:58 crc kubenswrapper[4854]: E0103 05:40:58.410022 4854 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 03 05:40:58 crc kubenswrapper[4854]: E0103 05:40:58.410056 4854 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 03 05:40:58 crc kubenswrapper[4854]: E0103 05:40:58.410113 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca podName:e0316818-6edd-4e11-9a85-cdc385194515 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:58.910065251 +0000 UTC m=+37.236641823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca") pod "route-controller-manager-6576b87f9c-9tbks" (UID: "e0316818-6edd-4e11-9a85-cdc385194515") : failed to sync configmap cache: timed out waiting for the condition Jan 03 05:40:58 crc kubenswrapper[4854]: E0103 05:40:58.410146 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config podName:e0316818-6edd-4e11-9a85-cdc385194515 nodeName:}" failed. No retries permitted until 2026-01-03 05:40:58.910127412 +0000 UTC m=+37.236703984 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config") pod "route-controller-manager-6576b87f9c-9tbks" (UID: "e0316818-6edd-4e11-9a85-cdc385194515") : failed to sync configmap cache: timed out waiting for the condition Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.428055 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.449312 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.467896 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.487016 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.506111 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pzlj8"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.507416 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 03 05:40:58 crc kubenswrapper[4854]: W0103 05:40:58.515678 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2259a421_a8dd_45e8_baa8_15cf1d37782e.slice/crio-3508cf5c9cbf0b13c0babdfa50bd36339703b69857d175fcd2c134c7beaf93c6 WatchSource:0}: Error finding container 3508cf5c9cbf0b13c0babdfa50bd36339703b69857d175fcd2c134c7beaf93c6: Status 404 returned error can't find the container with id 3508cf5c9cbf0b13c0babdfa50bd36339703b69857d175fcd2c134c7beaf93c6 Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.525933 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.547262 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.561994 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" event={"ID":"2259a421-a8dd-45e8-baa8-15cf1d37782e","Type":"ContainerStarted","Data":"3508cf5c9cbf0b13c0babdfa50bd36339703b69857d175fcd2c134c7beaf93c6"} Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.562724 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" event={"ID":"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79","Type":"ContainerStarted","Data":"6eb758eff28b7d97915c1ca2858eb594809aa71d4c595036b9b4e0980cf25114"} Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.567901 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.593314 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.606815 4854 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.626770 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.646021 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.666208 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.703741 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.706373 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.726626 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.737337 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.741506 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zhzlw"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.746714 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.767243 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.789336 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.792564 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wcbst"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.806491 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfmjq"] Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.808390 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.826278 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.845913 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.866623 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.885848 4854 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.908786 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.910538 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.926918 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.945598 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.965986 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 03 05:40:58 crc kubenswrapper[4854]: I0103 05:40:58.986049 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.006757 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.010437 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.010530 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:59 crc kubenswrapper[4854]: E0103 05:40:59.022728 4854 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.027995 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.046402 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.067007 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.085716 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 
05:40:59.106600 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.118109 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.118809 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.126734 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.144910 4854 request.go:700] Waited for 1.000922073s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.150788 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.166365 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.186989 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.208254 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.227516 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.247568 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.267028 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.287355 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.306154 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.327659 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.346493 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.369989 4854 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.387337 4854 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.406859 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.426322 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.447016 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.466798 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.487013 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.507227 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.527064 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.547340 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.571636 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.587949 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.659879 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2p9n\" (UniqueName: \"kubernetes.io/projected/e05124c8-4705-4d57-82ec-b1ae0658e98e-kube-api-access-c2p9n\") pod \"apiserver-7bbb656c7d-vd2jp\" (UID: \"e05124c8-4705-4d57-82ec-b1ae0658e98e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.660946 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk8bm\" (UniqueName: \"kubernetes.io/projected/3266855e-377c-4d01-ab10-7779bb871699-kube-api-access-jk8bm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fs4cj\" (UID: \"3266855e-377c-4d01-ab10-7779bb871699\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.680637 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvvr\" (UniqueName: \"kubernetes.io/projected/b0379c6e-b02d-40ef-b9ae-add1e633bc4a-kube-api-access-hgvvr\") pod \"authentication-operator-69f744f599-vtpbv\" (UID: \"b0379c6e-b02d-40ef-b9ae-add1e633bc4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.691070 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfw4c\" (UniqueName: \"kubernetes.io/projected/c31b366b-2182-4c59-8777-e552553ba8a8-kube-api-access-vfw4c\") pod \"oauth-openshift-558db77b4-h7drl\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.721445 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.733907 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddcr5\" (UniqueName: \"kubernetes.io/projected/c3a93788-c0a0-412d-aabd-1e93727e72f0-kube-api-access-ddcr5\") pod \"openshift-apiserver-operator-796bbdcf4f-b9llh\" (UID: \"c3a93788-c0a0-412d-aabd-1e93727e72f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.734628 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cxpm\" (UniqueName: \"kubernetes.io/projected/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-kube-api-access-7cxpm\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.749289 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f1ea809-34a6-45e1-87b1-6cce4f74ced0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7qqff\" (UID: \"6f1ea809-34a6-45e1-87b1-6cce4f74ced0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.756525 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qv6qz"] Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.762520 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hsfj\" (UniqueName: \"kubernetes.io/projected/9ecb343a-f88c-49d3-a792-696f8b94eca3-kube-api-access-5hsfj\") pod \"openshift-config-operator-7777fb866f-n82hj\" (UID: \"9ecb343a-f88c-49d3-a792-696f8b94eca3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:59 crc kubenswrapper[4854]: W0103 05:40:59.790707 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfdb7138_2cd3_450f_9421_e213122022af.slice/crio-c14904a1488ec58f79095491fbd6cd8c535543cd916c7960f541824494674660 WatchSource:0}: Error finding container c14904a1488ec58f79095491fbd6cd8c535543cd916c7960f541824494674660: Status 404 returned error can't find the container with id c14904a1488ec58f79095491fbd6cd8c535543cd916c7960f541824494674660 Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.792609 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7f2n\" (UniqueName: \"kubernetes.io/projected/d5805efa-800c-43df-ba80-7a7db226ebb3-kube-api-access-g7f2n\") pod \"downloads-7954f5f757-dmlm5\" (UID: \"d5805efa-800c-43df-ba80-7a7db226ebb3\") " pod="openshift-console/downloads-7954f5f757-dmlm5" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.805251 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4cxm\" (UniqueName: \"kubernetes.io/projected/dcde1a7d-7025-45cb-92de-483da7a86296-kube-api-access-s4cxm\") pod \"console-operator-58897d9998-2lwzj\" (UID: \"dcde1a7d-7025-45cb-92de-483da7a86296\") " 
pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.844263 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.845716 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab6ec22e-2a2c-4e28-8242-5bd783990843-service-ca-bundle\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.845814 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn4g6\" (UniqueName: \"kubernetes.io/projected/5737390b-6370-438a-9096-e47bdff12392-kube-api-access-fn4g6\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.845848 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-default-certificate\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.845899 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-certificates\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.845971 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-etcd-service-ca\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.846014 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5737390b-6370-438a-9096-e47bdff12392-serving-cert\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.846063 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85lzq\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-kube-api-access-85lzq\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.846184 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c378c365-855f-480d-9089-f6abd1b6a743-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: \"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.846226 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-stats-auth\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.846259 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c378c365-855f-480d-9089-f6abd1b6a743-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: \"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.846297 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-config\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.846328 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c378c365-855f-480d-9089-f6abd1b6a743-config\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: \"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.847226 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-metrics-certs\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.847256 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-trusted-ca\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.847279 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-etcd-ca\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.848424 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3251f8-9e38-4094-86f1-98187e5b2c75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: 
\"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.848485 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-tls\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.850166 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5737390b-6370-438a-9096-e47bdff12392-etcd-client\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.850223 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3251f8-9e38-4094-86f1-98187e5b2c75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.850250 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8qh4\" (UniqueName: \"kubernetes.io/projected/ab6ec22e-2a2c-4e28-8242-5bd783990843-kube-api-access-d8qh4\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.850321 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.850430 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-bound-sa-token\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: E0103 05:40:59.850953 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.350930953 +0000 UTC m=+38.677507525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.854475 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/154b4721-3c78-4469-946c-cdf5a68bd110-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-58vx9\" (UID: \"154b4721-3c78-4469-946c-cdf5a68bd110\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.877331 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kj8t\" (UniqueName: \"kubernetes.io/projected/8f4db886-9214-4c75-931e-acea9a580541-kube-api-access-9kj8t\") pod \"kube-storage-version-migrator-operator-b67b599dd-jdzfq\" (UID: \"8f4db886-9214-4c75-931e-acea9a580541\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.889041 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.889509 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.889753 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.893744 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.906380 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.906817 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 05:40:59 crc kubenswrapper[4854]: E0103 05:40:59.909866 4854 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 03 05:40:59 crc kubenswrapper[4854]: E0103 05:40:59.910201 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert podName:e0316818-6edd-4e11-9a85-cdc385194515 nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.910180988 +0000 UTC m=+39.236757560 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert") pod "route-controller-manager-6576b87f9c-9tbks" (UID: "e0316818-6edd-4e11-9a85-cdc385194515") : failed to sync secret cache: timed out waiting for the condition Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.911428 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.930266 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.930600 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.941799 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dmlm5" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.947973 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.951778 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952212 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-mountpoint-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952273 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3251f8-9e38-4094-86f1-98187e5b2c75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952318 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8qh4\" (UniqueName: \"kubernetes.io/projected/ab6ec22e-2a2c-4e28-8242-5bd783990843-kube-api-access-d8qh4\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952354 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-signing-cabundle\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952402 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d53f65da-2081-4c98-8807-d86727ac7f89-certs\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:40:59 crc kubenswrapper[4854]: E0103 05:40:59.952475 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.452408582 +0000 UTC m=+38.778985154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952652 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfktk\" (UniqueName: \"kubernetes.io/projected/07007d77-4861-45ac-aacd-17b840bef2ee-kube-api-access-wfktk\") pod \"package-server-manager-789f6589d5-99h4j\" (UID: \"07007d77-4861-45ac-aacd-17b840bef2ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952822 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2nh8\" (UniqueName: \"kubernetes.io/projected/58b842c2-f723-45ae-9d08-9218837bb66a-kube-api-access-x2nh8\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952886 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqtkh\" (UniqueName: \"kubernetes.io/projected/36a86a6b-3a2c-4994-af93-2b4ae754edfa-kube-api-access-vqtkh\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.952913 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-metrics-tls\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953074 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea2ee243-24c1-48aa-befb-ff2e4e839819-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 
05:40:59.953281 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36a86a6b-3a2c-4994-af93-2b4ae754edfa-secret-volume\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953322 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-socket-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953356 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jscr9\" (UniqueName: \"kubernetes.io/projected/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-kube-api-access-jscr9\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953381 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wn28\" (UniqueName: \"kubernetes.io/projected/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-kube-api-access-6wn28\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953409 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58b842c2-f723-45ae-9d08-9218837bb66a-srv-cert\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953433 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h89z\" (UniqueName: \"kubernetes.io/projected/5ab7ee8b-9182-43e2-85de-f8d92aa12587-kube-api-access-9h89z\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953461 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6j9m\" (UniqueName: \"kubernetes.io/projected/d53f65da-2081-4c98-8807-d86727ac7f89-kube-api-access-q6j9m\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953561 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-certificates\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953603 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc73bdbb-e111-427a-b2d6-95976be94058-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgczz\" (UID: \"fc73bdbb-e111-427a-b2d6-95976be94058\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.953634 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-proxy-tls\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:40:59 crc kubenswrapper[4854]: E0103 05:40:59.953633 4854 projected.go:194] Error preparing data for projected volume kube-api-access-pbtmc for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks: failed to sync configmap cache: timed out waiting for the condition Jan 03 05:40:59 crc kubenswrapper[4854]: E0103 05:40:59.953828 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc podName:e0316818-6edd-4e11-9a85-cdc385194515 nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.453720926 +0000 UTC m=+38.780297698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pbtmc" (UniqueName: "kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc") pod "route-controller-manager-6576b87f9c-9tbks" (UID: "e0316818-6edd-4e11-9a85-cdc385194515") : failed to sync configmap cache: timed out waiting for the condition Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.954862 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnx77\" (UniqueName: \"kubernetes.io/projected/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-kube-api-access-nnx77\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.954926 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5737390b-6370-438a-9096-e47bdff12392-serving-cert\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.954951 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997ae328-13fc-41c0-9d10-fde36789b6c4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-59h6x\" (UID: \"997ae328-13fc-41c0-9d10-fde36789b6c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.954995 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.955028 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85lzq\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-kube-api-access-85lzq\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.955540 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.956521 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea2ee243-24c1-48aa-befb-ff2e4e839819-metrics-tls\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.956552 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-proxy-tls\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.956653 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f526fe3-5134-42d0-a52f-e3c821137ef0-config\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.956675 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-csi-data-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.957116 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c378c365-855f-480d-9089-f6abd1b6a743-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: \"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.957190 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e68edf2-12f6-4758-aed1-2d72186bc7de-cert\") pod \"ingress-canary-qr6h4\" (UID: \"2e68edf2-12f6-4758-aed1-2d72186bc7de\") " pod="openshift-ingress-canary/ingress-canary-qr6h4" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.957332 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c378c365-855f-480d-9089-f6abd1b6a743-config\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: 
\"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.957799 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-trusted-ca\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.957843 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-etcd-ca\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.957861 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c378c365-855f-480d-9089-f6abd1b6a743-config\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: \"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.958584 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdlsv\" (UniqueName: \"kubernetes.io/projected/5b7f5d78-25a5-497a-9315-494fe26edb93-kube-api-access-pdlsv\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.958696 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnggr\" (UniqueName: \"kubernetes.io/projected/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-kube-api-access-hnggr\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.958747 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.958777 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf95282-b4d9-4814-bccc-e6c8c77658c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.958816 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z6bm\" (UniqueName: \"kubernetes.io/projected/77084a3a-5610-4014-a3bf-6d4073a74d44-kube-api-access-2z6bm\") pod \"catalog-operator-68c6474976-lltxw\" (UID: \"77084a3a-5610-4014-a3bf-6d4073a74d44\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.958892 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-ready\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.958958 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5ab7ee8b-9182-43e2-85de-f8d92aa12587-tmpfs\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.958994 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/caf95282-b4d9-4814-bccc-e6c8c77658c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959060 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5737390b-6370-438a-9096-e47bdff12392-etcd-client\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959147 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959182 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-images\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959203 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ab7ee8b-9182-43e2-85de-f8d92aa12587-apiservice-cert\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959228 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j29wc\" (UniqueName: \"kubernetes.io/projected/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-kube-api-access-j29wc\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959248 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrtk5\" (UniqueName: \"kubernetes.io/projected/997ae328-13fc-41c0-9d10-fde36789b6c4-kube-api-access-mrtk5\") pod \"multus-admission-controller-857f4d67dd-59h6x\" (UID: \"997ae328-13fc-41c0-9d10-fde36789b6c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959270 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-bound-sa-token\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959286 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbh62\" (UniqueName: \"kubernetes.io/projected/2e68edf2-12f6-4758-aed1-2d72186bc7de-kube-api-access-lbh62\") pod \"ingress-canary-qr6h4\" (UID: \"2e68edf2-12f6-4758-aed1-2d72186bc7de\") " pod="openshift-ingress-canary/ingress-canary-qr6h4" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959313 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959332 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-plugins-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959354 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab6ec22e-2a2c-4e28-8242-5bd783990843-service-ca-bundle\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959375 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959393 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07007d77-4861-45ac-aacd-17b840bef2ee-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-99h4j\" (UID: \"07007d77-4861-45ac-aacd-17b840bef2ee\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959413 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjwx\" (UniqueName: \"kubernetes.io/projected/7f1b22aa-8d19-479a-8439-52a095edf970-kube-api-access-8tjwx\") pod \"migrator-59844c95c7-q7mpr\" (UID: \"7f1b22aa-8d19-479a-8439-52a095edf970\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959433 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn4g6\" (UniqueName: \"kubernetes.io/projected/5737390b-6370-438a-9096-e47bdff12392-kube-api-access-fn4g6\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959451 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqf22\" (UniqueName: \"kubernetes.io/projected/ea2ee243-24c1-48aa-befb-ff2e4e839819-kube-api-access-sqf22\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959470 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q92r\" (UniqueName: \"kubernetes.io/projected/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-kube-api-access-6q92r\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959488 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5cx\" (UniqueName: \"kubernetes.io/projected/5f526fe3-5134-42d0-a52f-e3c821137ef0-kube-api-access-zv5cx\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959517 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-default-certificate\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959632 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58b842c2-f723-45ae-9d08-9218837bb66a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959670 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d53f65da-2081-4c98-8807-d86727ac7f89-node-bootstrap-token\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " 
pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959748 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz2hq\" (UniqueName: \"kubernetes.io/projected/fc73bdbb-e111-427a-b2d6-95976be94058-kube-api-access-jz2hq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgczz\" (UID: \"fc73bdbb-e111-427a-b2d6-95976be94058\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959781 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-etcd-service-ca\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959809 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36a86a6b-3a2c-4994-af93-2b4ae754edfa-config-volume\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959873 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caf95282-b4d9-4814-bccc-e6c8c77658c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959891 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-certificates\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.959912 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-stats-auth\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.960253 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-etcd-ca\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.960671 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c378c365-855f-480d-9089-f6abd1b6a743-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: \"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 
05:40:59.960726 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab6ec22e-2a2c-4e28-8242-5bd783990843-service-ca-bundle\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.960749 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-metrics-certs\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.960781 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-config\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.960815 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.960843 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-registration-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.961313 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea2ee243-24c1-48aa-befb-ff2e4e839819-trusted-ca\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.961346 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-signing-key\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.961401 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3251f8-9e38-4094-86f1-98187e5b2c75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.961447 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77084a3a-5610-4014-a3bf-6d4073a74d44-srv-cert\") pod \"catalog-operator-68c6474976-lltxw\" (UID: 
\"77084a3a-5610-4014-a3bf-6d4073a74d44\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.961470 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-config-volume\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.961509 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-tls\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.961552 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77084a3a-5610-4014-a3bf-6d4073a74d44-profile-collector-cert\") pod \"catalog-operator-68c6474976-lltxw\" (UID: \"77084a3a-5610-4014-a3bf-6d4073a74d44\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.961958 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3251f8-9e38-4094-86f1-98187e5b2c75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.962121 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-etcd-service-ca\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.962162 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5737390b-6370-438a-9096-e47bdff12392-config\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.962709 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ab7ee8b-9182-43e2-85de-f8d92aa12587-webhook-cert\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.962753 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f526fe3-5134-42d0-a52f-e3c821137ef0-serving-cert\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.964133 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-default-certificate\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.965450 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-trusted-ca\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.966610 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3251f8-9e38-4094-86f1-98187e5b2c75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.966984 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-stats-auth\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.967129 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.967680 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab6ec22e-2a2c-4e28-8242-5bd783990843-metrics-certs\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.968531 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5737390b-6370-438a-9096-e47bdff12392-etcd-client\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.968950 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c378c365-855f-480d-9089-f6abd1b6a743-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: \"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.969313 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5737390b-6370-438a-9096-e47bdff12392-serving-cert\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.973465 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-tls\") pod 
\"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.979528 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vtpbv"] Jan 03 05:40:59 crc kubenswrapper[4854]: I0103 05:40:59.987733 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.006888 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.020146 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:41:00 crc kubenswrapper[4854]: W0103 05:41:00.039599 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0379c6e_b02d_40ef_b9ae_add1e633bc4a.slice/crio-62516bf1c6e88af27dff1c7ec7ad53eb135e880f323c36e5a9c86fba339ce00c WatchSource:0}: Error finding container 62516bf1c6e88af27dff1c7ec7ad53eb135e880f323c36e5a9c86fba339ce00c: Status 404 returned error can't find the container with id 62516bf1c6e88af27dff1c7ec7ad53eb135e880f323c36e5a9c86fba339ce00c Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.049569 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.060315 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.063795 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58b842c2-f723-45ae-9d08-9218837bb66a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.063846 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d53f65da-2081-4c98-8807-d86727ac7f89-node-bootstrap-token\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.063881 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz2hq\" (UniqueName: \"kubernetes.io/projected/fc73bdbb-e111-427a-b2d6-95976be94058-kube-api-access-jz2hq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgczz\" (UID: \"fc73bdbb-e111-427a-b2d6-95976be94058\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.063910 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36a86a6b-3a2c-4994-af93-2b4ae754edfa-config-volume\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.063934 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caf95282-b4d9-4814-bccc-e6c8c77658c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.063970 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-registration-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.063999 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064028 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea2ee243-24c1-48aa-befb-ff2e4e839819-trusted-ca\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: 
I0103 05:41:00.064052 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-signing-key\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064069 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77084a3a-5610-4014-a3bf-6d4073a74d44-srv-cert\") pod \"catalog-operator-68c6474976-lltxw\" (UID: \"77084a3a-5610-4014-a3bf-6d4073a74d44\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064192 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-config-volume\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064220 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77084a3a-5610-4014-a3bf-6d4073a74d44-profile-collector-cert\") pod \"catalog-operator-68c6474976-lltxw\" (UID: \"77084a3a-5610-4014-a3bf-6d4073a74d44\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064246 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ab7ee8b-9182-43e2-85de-f8d92aa12587-webhook-cert\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064269 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f526fe3-5134-42d0-a52f-e3c821137ef0-serving-cert\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064295 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-mountpoint-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064328 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-signing-cabundle\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064363 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: 
\"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064389 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d53f65da-2081-4c98-8807-d86727ac7f89-certs\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064400 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-registration-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064411 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfktk\" (UniqueName: \"kubernetes.io/projected/07007d77-4861-45ac-aacd-17b840bef2ee-kube-api-access-wfktk\") pod \"package-server-manager-789f6589d5-99h4j\" (UID: \"07007d77-4861-45ac-aacd-17b840bef2ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064489 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2nh8\" (UniqueName: \"kubernetes.io/projected/58b842c2-f723-45ae-9d08-9218837bb66a-kube-api-access-x2nh8\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064531 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqtkh\" (UniqueName: \"kubernetes.io/projected/36a86a6b-3a2c-4994-af93-2b4ae754edfa-kube-api-access-vqtkh\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064560 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-metrics-tls\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064592 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea2ee243-24c1-48aa-befb-ff2e4e839819-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064621 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36a86a6b-3a2c-4994-af93-2b4ae754edfa-secret-volume\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064649 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064837 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-socket-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064871 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jscr9\" (UniqueName: \"kubernetes.io/projected/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-kube-api-access-jscr9\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064893 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wn28\" (UniqueName: \"kubernetes.io/projected/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-kube-api-access-6wn28\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064914 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h89z\" (UniqueName: \"kubernetes.io/projected/5ab7ee8b-9182-43e2-85de-f8d92aa12587-kube-api-access-9h89z\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064941 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58b842c2-f723-45ae-9d08-9218837bb66a-srv-cert\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.064965 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6j9m\" (UniqueName: \"kubernetes.io/projected/d53f65da-2081-4c98-8807-d86727ac7f89-kube-api-access-q6j9m\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065009 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-proxy-tls\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065044 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc73bdbb-e111-427a-b2d6-95976be94058-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgczz\" (UID: 
\"fc73bdbb-e111-427a-b2d6-95976be94058\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065102 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnx77\" (UniqueName: \"kubernetes.io/projected/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-kube-api-access-nnx77\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065134 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997ae328-13fc-41c0-9d10-fde36789b6c4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-59h6x\" (UID: \"997ae328-13fc-41c0-9d10-fde36789b6c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065163 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065199 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea2ee243-24c1-48aa-befb-ff2e4e839819-metrics-tls\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065218 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-proxy-tls\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065241 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f526fe3-5134-42d0-a52f-e3c821137ef0-config\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065262 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-csi-data-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065313 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e68edf2-12f6-4758-aed1-2d72186bc7de-cert\") pod \"ingress-canary-qr6h4\" (UID: \"2e68edf2-12f6-4758-aed1-2d72186bc7de\") " pod="openshift-ingress-canary/ingress-canary-qr6h4" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065363 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pdlsv\" (UniqueName: \"kubernetes.io/projected/5b7f5d78-25a5-497a-9315-494fe26edb93-kube-api-access-pdlsv\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065384 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnggr\" (UniqueName: \"kubernetes.io/projected/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-kube-api-access-hnggr\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065413 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf95282-b4d9-4814-bccc-e6c8c77658c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065437 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z6bm\" (UniqueName: \"kubernetes.io/projected/77084a3a-5610-4014-a3bf-6d4073a74d44-kube-api-access-2z6bm\") pod \"catalog-operator-68c6474976-lltxw\" (UID: \"77084a3a-5610-4014-a3bf-6d4073a74d44\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065463 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065490 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-ready\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065508 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5ab7ee8b-9182-43e2-85de-f8d92aa12587-tmpfs\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065529 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/caf95282-b4d9-4814-bccc-e6c8c77658c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065556 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-images\") pod 
\"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.066578 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.566560539 +0000 UTC m=+38.893137301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.067302 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-signing-cabundle\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.067357 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf95282-b4d9-4814-bccc-e6c8c77658c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.067920 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-ready\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.068100 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-config-volume\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.068158 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-mountpoint-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.068184 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36a86a6b-3a2c-4994-af93-2b4ae754edfa-config-volume\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.069180 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-socket-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.069246 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58b842c2-f723-45ae-9d08-9218837bb66a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.069333 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea2ee243-24c1-48aa-befb-ff2e4e839819-trusted-ca\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.069690 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d53f65da-2081-4c98-8807-d86727ac7f89-node-bootstrap-token\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.065576 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ab7ee8b-9182-43e2-85de-f8d92aa12587-apiservice-cert\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.070242 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8qh4\" (UniqueName: \"kubernetes.io/projected/ab6ec22e-2a2c-4e28-8242-5bd783990843-kube-api-access-d8qh4\") pod \"router-default-5444994796-tdlx9\" (UID: \"ab6ec22e-2a2c-4e28-8242-5bd783990843\") " pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.070574 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.070670 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-csi-data-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.070711 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5ab7ee8b-9182-43e2-85de-f8d92aa12587-tmpfs\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.072228 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.072482 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j29wc\" (UniqueName: \"kubernetes.io/projected/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-kube-api-access-j29wc\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.072539 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrtk5\" (UniqueName: \"kubernetes.io/projected/997ae328-13fc-41c0-9d10-fde36789b6c4-kube-api-access-mrtk5\") pod \"multus-admission-controller-857f4d67dd-59h6x\" (UID: \"997ae328-13fc-41c0-9d10-fde36789b6c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.072642 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-images\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.072794 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbh62\" (UniqueName: \"kubernetes.io/projected/2e68edf2-12f6-4758-aed1-2d72186bc7de-kube-api-access-lbh62\") pod \"ingress-canary-qr6h4\" (UID: \"2e68edf2-12f6-4758-aed1-2d72186bc7de\") " pod="openshift-ingress-canary/ingress-canary-qr6h4" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.072865 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f526fe3-5134-42d0-a52f-e3c821137ef0-config\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.072873 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.072950 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-plugins-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.075608 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.075645 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07007d77-4861-45ac-aacd-17b840bef2ee-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-99h4j\" (UID: \"07007d77-4861-45ac-aacd-17b840bef2ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.075688 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tjwx\" (UniqueName: \"kubernetes.io/projected/7f1b22aa-8d19-479a-8439-52a095edf970-kube-api-access-8tjwx\") pod \"migrator-59844c95c7-q7mpr\" (UID: \"7f1b22aa-8d19-479a-8439-52a095edf970\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.075735 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqf22\" (UniqueName: \"kubernetes.io/projected/ea2ee243-24c1-48aa-befb-ff2e4e839819-kube-api-access-sqf22\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.075785 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q92r\" (UniqueName: \"kubernetes.io/projected/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-kube-api-access-6q92r\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.075815 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv5cx\" (UniqueName: \"kubernetes.io/projected/5f526fe3-5134-42d0-a52f-e3c821137ef0-kube-api-access-zv5cx\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.073037 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-plugins-dir\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.073170 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-signing-key\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.073563 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.075525 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ab7ee8b-9182-43e2-85de-f8d92aa12587-webhook-cert\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.076829 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.079498 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77084a3a-5610-4014-a3bf-6d4073a74d44-srv-cert\") pod \"catalog-operator-68c6474976-lltxw\" (UID: \"77084a3a-5610-4014-a3bf-6d4073a74d44\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.079704 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d53f65da-2081-4c98-8807-d86727ac7f89-certs\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.080130 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36a86a6b-3a2c-4994-af93-2b4ae754edfa-secret-volume\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.080216 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caf95282-b4d9-4814-bccc-e6c8c77658c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.080603 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f526fe3-5134-42d0-a52f-e3c821137ef0-serving-cert\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.080724 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc73bdbb-e111-427a-b2d6-95976be94058-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgczz\" (UID: \"fc73bdbb-e111-427a-b2d6-95976be94058\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.080978 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-proxy-tls\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.081602 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea2ee243-24c1-48aa-befb-ff2e4e839819-metrics-tls\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.082178 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07007d77-4861-45ac-aacd-17b840bef2ee-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-99h4j\" (UID: \"07007d77-4861-45ac-aacd-17b840bef2ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.082730 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77084a3a-5610-4014-a3bf-6d4073a74d44-profile-collector-cert\") pod \"catalog-operator-68c6474976-lltxw\" (UID: \"77084a3a-5610-4014-a3bf-6d4073a74d44\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.085042 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85lzq\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-kube-api-access-85lzq\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.085336 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-metrics-tls\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.086002 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/997ae328-13fc-41c0-9d10-fde36789b6c4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-59h6x\" (UID: \"997ae328-13fc-41c0-9d10-fde36789b6c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.088683 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58b842c2-f723-45ae-9d08-9218837bb66a-srv-cert\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.088952 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.101247 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e68edf2-12f6-4758-aed1-2d72186bc7de-cert\") pod \"ingress-canary-qr6h4\" (UID: \"2e68edf2-12f6-4758-aed1-2d72186bc7de\") " pod="openshift-ingress-canary/ingress-canary-qr6h4" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.103692 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ab7ee8b-9182-43e2-85de-f8d92aa12587-apiservice-cert\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.104188 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-bound-sa-token\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.123050 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn4g6\" (UniqueName: \"kubernetes.io/projected/5737390b-6370-438a-9096-e47bdff12392-kube-api-access-fn4g6\") pod \"etcd-operator-b45778765-zvkh2\" (UID: \"5737390b-6370-438a-9096-e47bdff12392\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.127966 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h7drl"] Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.134864 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.134874 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-proxy-tls\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.145610 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c378c365-855f-480d-9089-f6abd1b6a743-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zr2w7\" (UID: \"c378c365-855f-480d-9089-f6abd1b6a743\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.165908 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj"] Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.176727 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.177411 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.677389091 +0000 UTC m=+39.003965663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.196834 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfktk\" (UniqueName: \"kubernetes.io/projected/07007d77-4861-45ac-aacd-17b840bef2ee-kube-api-access-wfktk\") pod \"package-server-manager-789f6589d5-99h4j\" (UID: \"07007d77-4861-45ac-aacd-17b840bef2ee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.203707 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea2ee243-24c1-48aa-befb-ff2e4e839819-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.224372 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz2hq\" (UniqueName: \"kubernetes.io/projected/fc73bdbb-e111-427a-b2d6-95976be94058-kube-api-access-jz2hq\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgczz\" (UID: \"fc73bdbb-e111-427a-b2d6-95976be94058\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.231505 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff"] Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.250500 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" Jan 03 05:41:00 crc kubenswrapper[4854]: W0103 05:41:00.263860 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f1ea809_34a6_45e1_87b1_6cce4f74ced0.slice/crio-ef8b123734927e7485d02f2d1c0eb243a9e0d53c352141333f5a18cdc8d3f2fb WatchSource:0}: Error finding container ef8b123734927e7485d02f2d1c0eb243a9e0d53c352141333f5a18cdc8d3f2fb: Status 404 returned error can't find the container with id ef8b123734927e7485d02f2d1c0eb243a9e0d53c352141333f5a18cdc8d3f2fb Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.264671 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2nh8\" (UniqueName: \"kubernetes.io/projected/58b842c2-f723-45ae-9d08-9218837bb66a-kube-api-access-x2nh8\") pod \"olm-operator-6b444d44fb-8d5t5\" (UID: \"58b842c2-f723-45ae-9d08-9218837bb66a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.280681 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.281701 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.781681053 +0000 UTC m=+39.108257625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.293114 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.302834 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqtkh\" (UniqueName: \"kubernetes.io/projected/36a86a6b-3a2c-4994-af93-2b4ae754edfa-kube-api-access-vqtkh\") pod \"collect-profiles-29456970-f9qt7\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.323296 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z6bm\" (UniqueName: \"kubernetes.io/projected/77084a3a-5610-4014-a3bf-6d4073a74d44-kube-api-access-2z6bm\") pod \"catalog-operator-68c6474976-lltxw\" (UID: \"77084a3a-5610-4014-a3bf-6d4073a74d44\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.323612 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.324060 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.329182 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wn28\" (UniqueName: \"kubernetes.io/projected/4343974e-fe2d-4bac-b5aa-c7cfcbfdec02-kube-api-access-6wn28\") pod \"dns-default-2q7dr\" (UID: \"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02\") " pod="openshift-dns/dns-default-2q7dr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.332563 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.343631 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.355799 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2q7dr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.357357 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h89z\" (UniqueName: \"kubernetes.io/projected/5ab7ee8b-9182-43e2-85de-f8d92aa12587-kube-api-access-9h89z\") pod \"packageserver-d55dfcdfc-fwgd2\" (UID: \"5ab7ee8b-9182-43e2-85de-f8d92aa12587\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.361292 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jscr9\" (UniqueName: \"kubernetes.io/projected/4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a-kube-api-access-jscr9\") pod \"machine-config-operator-74547568cd-sh5ck\" (UID: \"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.366408 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp"] Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.367259 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.374561 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnx77\" (UniqueName: \"kubernetes.io/projected/9958a532-c481-4e0d-9bb4-f1303bf8b1a9-kube-api-access-nnx77\") pod \"service-ca-9c57cc56f-d4txl\" (UID: \"9958a532-c481-4e0d-9bb4-f1303bf8b1a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.383174 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.386310 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.884818665 +0000 UTC m=+39.211395237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.390508 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.403067 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6j9m\" (UniqueName: \"kubernetes.io/projected/d53f65da-2081-4c98-8807-d86727ac7f89-kube-api-access-q6j9m\") pod \"machine-config-server-84wq5\" (UID: \"d53f65da-2081-4c98-8807-d86727ac7f89\") " pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.407437 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.417781 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.420833 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/caf95282-b4d9-4814-bccc-e6c8c77658c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xrcwd\" (UID: \"caf95282-b4d9-4814-bccc-e6c8c77658c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.435835 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnggr\" (UniqueName: \"kubernetes.io/projected/29a7524a-4f1c-4e10-ae41-8e05f91cbde6-kube-api-access-hnggr\") pod \"csi-hostpathplugin-qmphn\" (UID: \"29a7524a-4f1c-4e10-ae41-8e05f91cbde6\") " pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.446295 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdlsv\" (UniqueName: \"kubernetes.io/projected/5b7f5d78-25a5-497a-9315-494fe26edb93-kube-api-access-pdlsv\") pod \"marketplace-operator-79b997595-2jw8v\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.448975 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.457917 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.467157 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j29wc\" (UniqueName: \"kubernetes.io/projected/51e0b974-fa59-4ecb-b9f4-1dfe9381c17d-kube-api-access-j29wc\") pod \"machine-config-controller-84d6567774-hvh5b\" (UID: \"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.485620 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrtk5\" (UniqueName: \"kubernetes.io/projected/997ae328-13fc-41c0-9d10-fde36789b6c4-kube-api-access-mrtk5\") pod \"multus-admission-controller-857f4d67dd-59h6x\" (UID: \"997ae328-13fc-41c0-9d10-fde36789b6c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.486313 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.486461 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbtmc\" (UniqueName: \"kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.486733 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:00.986707985 +0000 UTC m=+39.313284757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.493801 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbtmc\" (UniqueName: \"kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.510753 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbh62\" (UniqueName: \"kubernetes.io/projected/2e68edf2-12f6-4758-aed1-2d72186bc7de-kube-api-access-lbh62\") pod \"ingress-canary-qr6h4\" (UID: \"2e68edf2-12f6-4758-aed1-2d72186bc7de\") " pod="openshift-ingress-canary/ingress-canary-qr6h4" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.525331 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.542868 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv5cx\" (UniqueName: \"kubernetes.io/projected/5f526fe3-5134-42d0-a52f-e3c821137ef0-kube-api-access-zv5cx\") pod \"service-ca-operator-777779d784-phv75\" (UID: \"5f526fe3-5134-42d0-a52f-e3c821137ef0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.552794 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tjwx\" (UniqueName: \"kubernetes.io/projected/7f1b22aa-8d19-479a-8439-52a095edf970-kube-api-access-8tjwx\") pod \"migrator-59844c95c7-q7mpr\" (UID: \"7f1b22aa-8d19-479a-8439-52a095edf970\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.562010 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.572336 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-84wq5" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.572881 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqf22\" (UniqueName: \"kubernetes.io/projected/ea2ee243-24c1-48aa-befb-ff2e4e839819-kube-api-access-sqf22\") pod \"ingress-operator-5b745b69d9-2brf7\" (UID: \"ea2ee243-24c1-48aa-befb-ff2e4e839819\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.574659 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.587491 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.587819 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.087802523 +0000 UTC m=+39.414379095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.590435 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q92r\" (UniqueName: \"kubernetes.io/projected/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-kube-api-access-6q92r\") pod \"cni-sysctl-allowlist-ds-9d9fw\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.604721 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" event={"ID":"c6974d72-3008-4b24-ab6c-332aa56cfd3b","Type":"ContainerStarted","Data":"092b485db5512a38bd98290ac92813bd6b4f543f1ac739e1cc2fd14547613911"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.604772 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" event={"ID":"c6974d72-3008-4b24-ab6c-332aa56cfd3b","Type":"ContainerStarted","Data":"75827f7c4dd6005c526abe24463443105e16ae40b1b4e3a7e34599dd3316719a"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.606657 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" event={"ID":"6f1ea809-34a6-45e1-87b1-6cce4f74ced0","Type":"ContainerStarted","Data":"ef8b123734927e7485d02f2d1c0eb243a9e0d53c352141333f5a18cdc8d3f2fb"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.607998 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" event={"ID":"55c2bf44-bf9f-4dd8-910d-f24744fa629d","Type":"ContainerStarted","Data":"a10c2d88108ae4a1edfe13016f2f3998538e580ef4dba10a98dda9ec527fe0db"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.608028 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" event={"ID":"55c2bf44-bf9f-4dd8-910d-f24744fa629d","Type":"ContainerStarted","Data":"401cddecbe6cd7f2211be5197bc531938b008cdcf1b2e69a5e845e2b06e42b74"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.610734 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" event={"ID":"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79","Type":"ContainerStarted","Data":"0342f65659a73ff38b19531b6bc40896f549f45cb97f4ffeb924cd17b585f67b"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.611668 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" event={"ID":"c31b366b-2182-4c59-8777-e552553ba8a8","Type":"ContainerStarted","Data":"9d5b94dfd9b8041d415b0d27ce6a6626e408d486e23c12e02993ec66fee529fe"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.613363 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" event={"ID":"cfdb7138-2cd3-450f-9421-e213122022af","Type":"ContainerStarted","Data":"5f368978a8d8798d9bf18e9e812b132ea190b42a80a67589c45847f778922e3a"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.613387 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" event={"ID":"cfdb7138-2cd3-450f-9421-e213122022af","Type":"ContainerStarted","Data":"c14904a1488ec58f79095491fbd6cd8c535543cd916c7960f541824494674660"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.614015 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qr6h4" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.621293 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" event={"ID":"7d7acbec-3363-42f1-b14d-150409b8c40b","Type":"ContainerStarted","Data":"ab2c5997e913c32b81190d8a32299720151d9d0dd9d33b021bee394839863bf9"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.621386 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.621405 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" event={"ID":"7d7acbec-3363-42f1-b14d-150409b8c40b","Type":"ContainerStarted","Data":"99b23c817a8b2e751a4084c606f1d332ca954e7e5a718cddc376d1d0cac5d9d7"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.626154 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" event={"ID":"3266855e-377c-4d01-ab10-7779bb871699","Type":"ContainerStarted","Data":"8d013b4538f154388067515f0dcb8d0f6a46a4a959a0fc6e158eed48ddf735b1"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.644012 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.645063 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" event={"ID":"b0379c6e-b02d-40ef-b9ae-add1e633bc4a","Type":"ContainerStarted","Data":"62516bf1c6e88af27dff1c7ec7ad53eb135e880f323c36e5a9c86fba339ce00c"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.654238 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.660731 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.673658 4854 generic.go:334] "Generic (PLEG): container finished" podID="2259a421-a8dd-45e8-baa8-15cf1d37782e" containerID="c1ef1ba47478e27500f9a1e99ba8b9a0087aaff7171dec6593e8c459f5146ba3" exitCode=0 Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.673762 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" event={"ID":"2259a421-a8dd-45e8-baa8-15cf1d37782e","Type":"ContainerDied","Data":"c1ef1ba47478e27500f9a1e99ba8b9a0087aaff7171dec6593e8c459f5146ba3"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.678298 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.681020 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zhzlw" event={"ID":"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc","Type":"ContainerStarted","Data":"26fc79b066ff144cb2fbb1b2afc5e1bddcf9b6336e482cca9f1dd2f20889a365"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.681062 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zhzlw" event={"ID":"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc","Type":"ContainerStarted","Data":"94f668d9c1065fe601f661008a84a0de38236b33c529e8668a457a45879bc164"} Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.689148 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.690850 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.190834013 +0000 UTC m=+39.517410585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.712650 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.725927 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.789814 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.791382 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.291360007 +0000 UTC m=+39.617936589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.846730 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9"] Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.879178 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n82hj"] Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.889561 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2lwzj"] Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.897577 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.897660 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq"] Jan 03 05:41:00 crc kubenswrapper[4854]: E0103 05:41:00.898382 4854 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.39836434 +0000 UTC m=+39.724940912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.912532 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dmlm5"] Jan 03 05:41:00 crc kubenswrapper[4854]: I0103 05:41:00.981621 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh"] Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.001276 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.001452 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.003625 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.503581526 +0000 UTC m=+39.830158098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.014596 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert\") pod \"route-controller-manager-6576b87f9c-9tbks\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.103444 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.104407 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.604390168 +0000 UTC m=+39.930966740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.204683 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.205553 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.205969 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.705954119 +0000 UTC m=+40.032530691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.230749 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j"] Jan 03 05:41:01 crc kubenswrapper[4854]: W0103 05:41:01.293651 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ecb343a_f88c_49d3_a792_696f8b94eca3.slice/crio-420cc408acefa49174194609c5320a88728113dd7fe08562a863207ce3caceeb WatchSource:0}: Error finding container 420cc408acefa49174194609c5320a88728113dd7fe08562a863207ce3caceeb: Status 404 returned error can't find the container with id 420cc408acefa49174194609c5320a88728113dd7fe08562a863207ce3caceeb Jan 03 05:41:01 crc kubenswrapper[4854]: W0103 05:41:01.302682 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd53f65da_2081_4c98_8807_d86727ac7f89.slice/crio-ae407763193c2c1b9c7875de5b2deefba1736ec54d075954859c4f295167584a WatchSource:0}: Error finding container ae407763193c2c1b9c7875de5b2deefba1736ec54d075954859c4f295167584a: Status 404 returned error can't find the container with id ae407763193c2c1b9c7875de5b2deefba1736ec54d075954859c4f295167584a Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.307637 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.307954 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.807938761 +0000 UTC m=+40.134515333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.309028 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5"] Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.409887 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.410142 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.410882 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:01.910858778 +0000 UTC m=+40.237435350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.423163 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9a35ce4-6254-4744-b9a8-966399ae89cc-metrics-certs\") pod \"network-metrics-daemon-6wgwf\" (UID: \"c9a35ce4-6254-4744-b9a8-966399ae89cc\") " pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.504149 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz"] Jan 03 05:41:01 crc kubenswrapper[4854]: W0103 05:41:01.509844 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07007d77_4861_45ac_aacd_17b840bef2ee.slice/crio-6301118ae960e58e010c944ffaf13e7bc1104952853a8fcf7fe7ed49ac4019b6 WatchSource:0}: Error finding container 6301118ae960e58e010c944ffaf13e7bc1104952853a8fcf7fe7ed49ac4019b6: Status 404 returned error can't find the container with id 6301118ae960e58e010c944ffaf13e7bc1104952853a8fcf7fe7ed49ac4019b6 Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.510974 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.511313 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.0113025 +0000 UTC m=+40.337879072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.554305 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6wgwf" Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.614931 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.615653 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.115631433 +0000 UTC m=+40.442208005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.617328 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d4txl"] Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.646490 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" podStartSLOduration=16.646469672 podStartE2EDuration="16.646469672s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:01.645846816 +0000 UTC m=+39.972423388" watchObservedRunningTime="2026-01-03 05:41:01.646469672 +0000 UTC m=+39.973046264" Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.720758 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.721109 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.221096706 +0000 UTC m=+40.547673278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.793031 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2q7dr"] Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.798099 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7"] Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.816005 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zvkh2"] Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.831626 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.831930 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.331915677 +0000 UTC m=+40.658492249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.833566 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw"] Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.938091 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" event={"ID":"0ab5b4f5-1bd3-4bbe-b749-4f5607aa9f79","Type":"ContainerStarted","Data":"6008db19429c72bf7cc829a150033f722655242fa511dfe12ded87d89debe7ca"} Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.938286 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:01 crc kubenswrapper[4854]: E0103 05:41:01.938542 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-03 05:41:02.43852988 +0000 UTC m=+40.765106452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.951775 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" event={"ID":"c3a93788-c0a0-412d-aabd-1e93727e72f0","Type":"ContainerStarted","Data":"e3ed6f3c183f6ebafa1e5a731a1aadf7c5d96b9fb8ebd90bb895d3c5866d9332"} Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.954319 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" event={"ID":"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f","Type":"ContainerStarted","Data":"b6aea1a9e8cada6f94502eaa194ee03c1df61ec2d2d5b33e29f8f1b563f17baa"} Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.957172 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-84wq5" event={"ID":"d53f65da-2081-4c98-8807-d86727ac7f89","Type":"ContainerStarted","Data":"ae407763193c2c1b9c7875de5b2deefba1736ec54d075954859c4f295167584a"} Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.959326 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dmlm5" event={"ID":"d5805efa-800c-43df-ba80-7a7db226ebb3","Type":"ContainerStarted","Data":"b97b69a7212c49dc44dfd99bba8eff7477cc7f53a83b156f9b121e0855c46631"} Jan 03 05:41:01 crc kubenswrapper[4854]: I0103 05:41:01.971901 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zhzlw" podStartSLOduration=16.971886504 podStartE2EDuration="16.971886504s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:01.971286608 +0000 UTC m=+40.297863180" watchObservedRunningTime="2026-01-03 05:41:01.971886504 +0000 UTC m=+40.298463076" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.033333 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.033364 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" event={"ID":"dcde1a7d-7025-45cb-92de-483da7a86296","Type":"ContainerStarted","Data":"b15d01debcc25a73e38ef770779ae8bb7ab5856c1561802febb71574d8f4cc76"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.039480 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.039787 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.039924 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.539901406 +0000 UTC m=+40.866477978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.040109 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.040370 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.540358508 +0000 UTC m=+40.866935080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.048809 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qmphn"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.061802 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.063411 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.116159 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" event={"ID":"55c2bf44-bf9f-4dd8-910d-f24744fa629d","Type":"ContainerStarted","Data":"bf96eda72c66bab956ee211eff6445aa93b017f2cb442ff2878deedb9c806bdc"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.147347 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.148755 4854 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.648738546 +0000 UTC m=+40.975315108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.184500 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.184531 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.184550 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" event={"ID":"58b842c2-f723-45ae-9d08-9218837bb66a","Type":"ContainerStarted","Data":"554e5ec7958fd0f932c1ef1a1da6e5fd9c4aff5aa53515928902a9a5debff5c3"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.184567 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" event={"ID":"c31b366b-2182-4c59-8777-e552553ba8a8","Type":"ContainerStarted","Data":"91a30b02a02410d5306acdb48fd96666de1bed90ab02f525930c913ecb5b8fbb"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.184577 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tdlx9" event={"ID":"ab6ec22e-2a2c-4e28-8242-5bd783990843","Type":"ContainerStarted","Data":"5202c8742cfe64613a0279c50c9ba5ecab7296448cfca6010ba76bd644abeb98"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.198452 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" event={"ID":"8f4db886-9214-4c75-931e-acea9a580541","Type":"ContainerStarted","Data":"322b9709480ea755476555da6beb69db7d4ef1bcba9dcd4de1e16f5b5520bffa"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.229792 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" event={"ID":"c6974d72-3008-4b24-ab6c-332aa56cfd3b","Type":"ContainerStarted","Data":"f885650081aac2790ff4feea41422bbf491c853f65bef158178bc40969393271"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.237559 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" event={"ID":"9ecb343a-f88c-49d3-a792-696f8b94eca3","Type":"ContainerStarted","Data":"420cc408acefa49174194609c5320a88728113dd7fe08562a863207ce3caceeb"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.249626 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.250995 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" event={"ID":"e05124c8-4705-4d57-82ec-b1ae0658e98e","Type":"ContainerStarted","Data":"9e1cf3c129706143b18aaad0b18b6d5f82024486a0f2a0effbf456a6fa5e4b76"} Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.254277 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.75426021 +0000 UTC m=+41.080836772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.262678 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.291519 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" event={"ID":"b0379c6e-b02d-40ef-b9ae-add1e633bc4a","Type":"ContainerStarted","Data":"b003306c709cdb4b4c71e9bbbb037118b8466a77a516a31cf224abfbcfbcd931"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.298606 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.306287 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" event={"ID":"fc73bdbb-e111-427a-b2d6-95976be94058","Type":"ContainerStarted","Data":"7687e84b15962301556bfe83eb6b2306f11e5685fdf11ae7a7e7be20a6180965"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.316445 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" event={"ID":"07007d77-4861-45ac-aacd-17b840bef2ee","Type":"ContainerStarted","Data":"6301118ae960e58e010c944ffaf13e7bc1104952853a8fcf7fe7ed49ac4019b6"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.317912 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" event={"ID":"154b4721-3c78-4469-946c-cdf5a68bd110","Type":"ContainerStarted","Data":"798b3dae11a786821e4d5ce6a6c060b0d00584a48dcb3a832a73dbc605096984"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.319282 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" event={"ID":"6f1ea809-34a6-45e1-87b1-6cce4f74ced0","Type":"ContainerStarted","Data":"61ec314ef47d8f59b9c0c97b789d70d5bb8b2da5ca46be095ce5effaa2420d74"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.350427 4854 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.354040 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.854024975 +0000 UTC m=+41.180601547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.364823 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" event={"ID":"3266855e-377c-4d01-ab10-7779bb871699","Type":"ContainerStarted","Data":"8dfc8d20d08e218aa9798740d8824b336e900cbb16f9acf3875dd3533846c10d"} Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.379833 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wcbst" podStartSLOduration=17.379818543 podStartE2EDuration="17.379818543s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:02.378235142 +0000 UTC m=+40.704811724" watchObservedRunningTime="2026-01-03 05:41:02.379818543 +0000 UTC m=+40.706395115" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.461470 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.465446 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:02.965425881 +0000 UTC m=+41.292002453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.519520 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" podStartSLOduration=17.519494782 podStartE2EDuration="17.519494782s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:02.518758133 +0000 UTC m=+40.845334705" watchObservedRunningTime="2026-01-03 05:41:02.519494782 +0000 UTC m=+40.846071354" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.569051 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.569861 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.069832786 +0000 UTC m=+41.396409358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.593241 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24g4d" podStartSLOduration=17.593217542 podStartE2EDuration="17.593217542s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:02.555917326 +0000 UTC m=+40.882493898" watchObservedRunningTime="2026-01-03 05:41:02.593217542 +0000 UTC m=+40.919794104" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.674593 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.675147 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.175129024 +0000 UTC m=+41.501705586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.765247 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" podStartSLOduration=17.765224509 podStartE2EDuration="17.765224509s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:02.763328099 +0000 UTC m=+41.089904701" watchObservedRunningTime="2026-01-03 05:41:02.765224509 +0000 UTC m=+41.091801081" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.767822 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.776653 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.777129 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.277104256 +0000 UTC m=+41.603680828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.827155 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-59h6x"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.827548 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-84wq5" podStartSLOduration=4.827531253 podStartE2EDuration="4.827531253s" podCreationTimestamp="2026-01-03 05:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:02.80617664 +0000 UTC m=+41.132753212" watchObservedRunningTime="2026-01-03 05:41:02.827531253 +0000 UTC m=+41.154107825" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.881818 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.882161 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.382148608 +0000 UTC m=+41.708725180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.905119 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q9s7n" podStartSLOduration=17.905052951000002 podStartE2EDuration="17.905052951s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:02.899029245 +0000 UTC m=+41.225605817" watchObservedRunningTime="2026-01-03 05:41:02.905052951 +0000 UTC m=+41.231629523" Jan 03 05:41:02 crc kubenswrapper[4854]: W0103 05:41:02.935433 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod997ae328_13fc_41c0_9d10_fde36789b6c4.slice/crio-b1f2a1f3a3607484c7b0cf477fdb5d04f0442a7f6a7ce0b910ee79cfbe871e9a WatchSource:0}: Error finding container b1f2a1f3a3607484c7b0cf477fdb5d04f0442a7f6a7ce0b910ee79cfbe871e9a: Status 404 returned error can't find the container with id b1f2a1f3a3607484c7b0cf477fdb5d04f0442a7f6a7ce0b910ee79cfbe871e9a Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.949241 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qr6h4"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.949975 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fs4cj" podStartSLOduration=17.949955915 podStartE2EDuration="17.949955915s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:02.949849582 +0000 UTC m=+41.276426154" watchObservedRunningTime="2026-01-03 05:41:02.949955915 +0000 UTC m=+41.276532487" Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.980406 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-phv75"] Jan 03 05:41:02 crc kubenswrapper[4854]: I0103 05:41:02.985502 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:02 crc kubenswrapper[4854]: E0103 05:41:02.985828 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.485812004 +0000 UTC m=+41.812388566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: W0103 05:41:03.038089 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f526fe3_5134_42d0_a52f_e3c821137ef0.slice/crio-82b9a7109d765b434918f73737bfab05ad565fac1b79fccc6341c7f11b838295 WatchSource:0}: Error finding container 82b9a7109d765b434918f73737bfab05ad565fac1b79fccc6341c7f11b838295: Status 404 returned error can't find the container with id 82b9a7109d765b434918f73737bfab05ad565fac1b79fccc6341c7f11b838295 Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.038535 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7qqff" podStartSLOduration=18.038512309 podStartE2EDuration="18.038512309s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:03.011554591 +0000 UTC m=+41.338131163" watchObservedRunningTime="2026-01-03 05:41:03.038512309 +0000 UTC m=+41.365088891" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.039347 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks"] Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.113574 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.115498 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.615478253 +0000 UTC m=+41.942054825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.159106 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jw8v"] Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.218275 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.218656 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.718641096 +0000 UTC m=+42.045217668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.320933 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.321565 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.821537592 +0000 UTC m=+42.148114164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.394001 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" event={"ID":"9958a532-c481-4e0d-9bb4-f1303bf8b1a9","Type":"ContainerStarted","Data":"95f2ce8db9c0674bc06b05edf496aa52cbf78c638307d8dc3f9bc467465db41f"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.398413 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2q7dr" event={"ID":"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02","Type":"ContainerStarted","Data":"4163866e1e6fe9aa1817208302cd30de92b266fe93d5ed2e0b7cdd3b0a1f8d11"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.409365 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" event={"ID":"5b7f5d78-25a5-497a-9315-494fe26edb93","Type":"ContainerStarted","Data":"e0665b3045a4304a58bf0d3e0540ab570d559dd3be1331fcbf5c0932309d1f22"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.419491 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6wgwf"] Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.427988 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" event={"ID":"7f1b22aa-8d19-479a-8439-52a095edf970","Type":"ContainerStarted","Data":"6b9d7b5d7c6ba9cc02b315ed24bfd7d084b1c27296a75047c0fe61917f4ab245"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.428861 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.429366 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:03.929346235 +0000 UTC m=+42.255922807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.461958 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" event={"ID":"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a","Type":"ContainerStarted","Data":"b6cfd088454e70fc7a50495d4706ed5d200e847bca7395714184574e166923c0"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.479277 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dmlm5" event={"ID":"d5805efa-800c-43df-ba80-7a7db226ebb3","Type":"ContainerStarted","Data":"c2b3c6e4d59744e0480291ace9fa25df054d39ceabe25ca5c973ad62312a662f"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.481416 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dmlm5" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.501145 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" event={"ID":"77084a3a-5610-4014-a3bf-6d4073a74d44","Type":"ContainerStarted","Data":"dd3a50f6342ee441578443fb3ebe85a3f506acd74ae26bb1f429a8768a6cd6b2"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.501350 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.501424 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.519197 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qr6h4" event={"ID":"2e68edf2-12f6-4758-aed1-2d72186bc7de","Type":"ContainerStarted","Data":"dea6f64a12054cdcd3258e62988f8a6428105b23199c1d7f40b486ffa2af6c32"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.534257 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" event={"ID":"997ae328-13fc-41c0-9d10-fde36789b6c4","Type":"ContainerStarted","Data":"b1f2a1f3a3607484c7b0cf477fdb5d04f0442a7f6a7ce0b910ee79cfbe871e9a"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.536612 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.537048 4854 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.037032776 +0000 UTC m=+42.363609348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.544964 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" event={"ID":"dcde1a7d-7025-45cb-92de-483da7a86296","Type":"ContainerStarted","Data":"e471bf3617e0cc1e81dc107d06d8c5e3583056df720e8fbf7f91a51f43b4521b"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.545649 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.551786 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.551854 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.552193 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerStarted","Data":"b274536420159da3457c57e7ecbd1218fa76398e63341738cc6c98a96e06ec66"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.554417 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podStartSLOduration=18.554403826 podStartE2EDuration="18.554403826s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:03.552507486 +0000 UTC m=+41.879084078" watchObservedRunningTime="2026-01-03 05:41:03.554403826 +0000 UTC m=+41.880980408" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.555210 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dmlm5" podStartSLOduration=18.555203046 podStartE2EDuration="18.555203046s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:03.517619603 +0000 UTC m=+41.844196185" watchObservedRunningTime="2026-01-03 05:41:03.555203046 +0000 UTC m=+41.881779618" Jan 03 05:41:03 crc 
kubenswrapper[4854]: I0103 05:41:03.566313 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-84wq5" event={"ID":"d53f65da-2081-4c98-8807-d86727ac7f89","Type":"ContainerStarted","Data":"61c6616c8c733cd67cf98569716903c65dc37ac5518f6dcddc8ca2e75c86545b"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.574797 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" event={"ID":"c378c365-855f-480d-9089-f6abd1b6a743","Type":"ContainerStarted","Data":"3d005b90c7e3beb77c901725fbfe5cb7cd73bfff5d0cefdb3d8540fe08d1808b"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.579233 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" event={"ID":"5f526fe3-5134-42d0-a52f-e3c821137ef0","Type":"ContainerStarted","Data":"82b9a7109d765b434918f73737bfab05ad565fac1b79fccc6341c7f11b838295"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.581274 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" event={"ID":"5ab7ee8b-9182-43e2-85de-f8d92aa12587","Type":"ContainerStarted","Data":"db11f191f3b5a58f5aadbd2afb663d569bfae28393b7b887317eef9c2d29dfd1"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.586860 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" event={"ID":"36a86a6b-3a2c-4994-af93-2b4ae754edfa","Type":"ContainerStarted","Data":"e1ac23ce95f6c0a8b2048609d4e6e05117cec6d1a4d93da1d5fce1491008a127"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.589467 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podStartSLOduration=18.589455284 podStartE2EDuration="18.589455284s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:03.588021577 +0000 UTC m=+41.914598159" watchObservedRunningTime="2026-01-03 05:41:03.589455284 +0000 UTC m=+41.916031856" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.592969 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" event={"ID":"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d","Type":"ContainerStarted","Data":"af903a5f6991e349d15fd23ca5114fd6dc2b15ef8c020a077281990cfa77a4a0"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.602837 4854 generic.go:334] "Generic (PLEG): container finished" podID="e05124c8-4705-4d57-82ec-b1ae0658e98e" containerID="d104c2a7ffc25f5bc215449bd9ea28e94109b52cdb314be367f7b7042468cfea" exitCode=0 Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.603253 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" event={"ID":"e05124c8-4705-4d57-82ec-b1ae0658e98e","Type":"ContainerDied","Data":"d104c2a7ffc25f5bc215449bd9ea28e94109b52cdb314be367f7b7042468cfea"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.609858 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" 
event={"ID":"caf95282-b4d9-4814-bccc-e6c8c77658c5","Type":"ContainerStarted","Data":"bafd603455c4390a5bb8893e7f19de8c7ac573ed963fdd8adb89470aefdf341b"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.616885 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" event={"ID":"e0316818-6edd-4e11-9a85-cdc385194515","Type":"ContainerStarted","Data":"1f705b963af8804db36c9b3e9ffa96ee6202d0757667c41c4379d77c1c54db92"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.618304 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" event={"ID":"5737390b-6370-438a-9096-e47bdff12392","Type":"ContainerStarted","Data":"1990ef40335745fafdc9ce4b52ffb3e0ee0a712de44e801fc6dabe15e43f4d6b"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.620402 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" event={"ID":"ea2ee243-24c1-48aa-befb-ff2e4e839819","Type":"ContainerStarted","Data":"cdc6368a5ad57d529e017927290473d19856c0e06537ed830973beac1d1c7d34"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.621811 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tdlx9" event={"ID":"ab6ec22e-2a2c-4e28-8242-5bd783990843","Type":"ContainerStarted","Data":"63cc6c355f6397dba553d8cb89d15fb9ff68767748c5f862c6a7a5d7d0806e07"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.623657 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" event={"ID":"8f4db886-9214-4c75-931e-acea9a580541","Type":"ContainerStarted","Data":"d6961a8abe295e4bd220709b43f2131ade406a63052cdcd018b90bf8ac1ff1a1"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.625189 4854 generic.go:334] "Generic (PLEG): container finished" podID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerID="20717c96531953aa1f28312d5f545e61d40fd099538847ccad37fc0ee70e3e14" exitCode=0 Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.625973 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" event={"ID":"9ecb343a-f88c-49d3-a792-696f8b94eca3","Type":"ContainerDied","Data":"20717c96531953aa1f28312d5f545e61d40fd099538847ccad37fc0ee70e3e14"} Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.638668 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.656248 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.156222514 +0000 UTC m=+42.482799086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.694018 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jdzfq" podStartSLOduration=18.693997422 podStartE2EDuration="18.693997422s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:03.67887125 +0000 UTC m=+42.005447822" watchObservedRunningTime="2026-01-03 05:41:03.693997422 +0000 UTC m=+42.020573994" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.735743 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-tdlx9" podStartSLOduration=18.735727984 podStartE2EDuration="18.735727984s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:03.735260071 +0000 UTC m=+42.061836653" watchObservedRunningTime="2026-01-03 05:41:03.735727984 +0000 UTC m=+42.062304556" Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.742158 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.747658 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.247640972 +0000 UTC m=+42.574217544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.846607 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.849606 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.347987432 +0000 UTC m=+42.674563994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.849687 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.850123 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.350115827 +0000 UTC m=+42.676692389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.950448 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.950677 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.450635102 +0000 UTC m=+42.777211674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:03 crc kubenswrapper[4854]: I0103 05:41:03.950964 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:03 crc kubenswrapper[4854]: E0103 05:41:03.951422 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.451401472 +0000 UTC m=+42.777978044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.054824 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.055572 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.55555727 +0000 UTC m=+42.882133832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.157046 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.157370 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.657357657 +0000 UTC m=+42.983934219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.257518 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.258288 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.758267461 +0000 UTC m=+43.084844033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.337093 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.345405 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 05:41:04 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld Jan 03 05:41:04 crc kubenswrapper[4854]: [+]process-running ok Jan 03 05:41:04 crc kubenswrapper[4854]: healthz check failed Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.345472 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.358927 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.359261 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.859248848 +0000 UTC m=+43.185825420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.460738 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.461253 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.961063586 +0000 UTC m=+43.287640158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.461383 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.461803 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:04.961795374 +0000 UTC m=+43.288371936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.565257 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.565424 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.065393209 +0000 UTC m=+43.391969781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.565698 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.565973 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.065964983 +0000 UTC m=+43.392541555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.637001 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" event={"ID":"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f","Type":"ContainerStarted","Data":"032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.638547 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.660449 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerStarted","Data":"161aa8978d0e162a2d5fef70db9445adca1e8119b53f12d630da478dbffc384e"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.661945 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" event={"ID":"9958a532-c481-4e0d-9bb4-f1303bf8b1a9","Type":"ContainerStarted","Data":"e2634208c06d6dca1050a92cf997abdb8ed53b192978679b754251923d14a826"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.670316 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.670551 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.170509722 +0000 UTC m=+43.497086294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.670637 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.670915 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-03 05:41:05.170901792 +0000 UTC m=+43.497478364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.672267 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" event={"ID":"36a86a6b-3a2c-4994-af93-2b4ae754edfa","Type":"ContainerStarted","Data":"0d2642e00dc964fc0711de5e89b1f8f3eb4f9c0a908d3d734cf3e533f0ab34e1"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.689687 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podStartSLOduration=6.689664288 podStartE2EDuration="6.689664288s" podCreationTimestamp="2026-01-03 05:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:04.689445573 +0000 UTC m=+43.016022145" watchObservedRunningTime="2026-01-03 05:41:04.689664288 +0000 UTC m=+43.016240860" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.700432 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" event={"ID":"e0316818-6edd-4e11-9a85-cdc385194515","Type":"ContainerStarted","Data":"5fef0a059ca9fa95817cffca2c3d17f70f1a1fe2384ed6b3b334f9d0c005b83e"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.700869 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.726546 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" event={"ID":"cfdb7138-2cd3-450f-9421-e213122022af","Type":"ContainerStarted","Data":"2ae4f8390f23b1d5f57f060fe65769ae0d46fd7cb8fd458df9eef1ed6c81a5a1"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.750305 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" podStartSLOduration=19.750279389 podStartE2EDuration="19.750279389s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:04.748175084 +0000 UTC m=+43.074751656" watchObservedRunningTime="2026-01-03 05:41:04.750279389 +0000 UTC m=+43.076855961" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.781162 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2q7dr" event={"ID":"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02","Type":"ContainerStarted","Data":"96d13de1dbe788902363a2d5afa5e2f854041539cda535c92ec5e963a4786d68"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.781234 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2q7dr" 
event={"ID":"4343974e-fe2d-4bac-b5aa-c7cfcbfdec02","Type":"ContainerStarted","Data":"401810fe2bdc36d108b36e59b6afbc3a8bf6490877fc9f06a4955f4e4b711d94"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.782726 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2q7dr" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.784491 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.796884 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.296845465 +0000 UTC m=+43.623422037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.817866 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" event={"ID":"5737390b-6370-438a-9096-e47bdff12392","Type":"ContainerStarted","Data":"eb6b69aab22e2ed686c74e0b9cb39fd8484e765407f2004ee55f2212a9e32f08"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.821229 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-d4txl" podStartSLOduration=18.821201886 podStartE2EDuration="18.821201886s" podCreationTimestamp="2026-01-03 05:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:04.800069359 +0000 UTC m=+43.126645931" watchObservedRunningTime="2026-01-03 05:41:04.821201886 +0000 UTC m=+43.147778458" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.863503 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" event={"ID":"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d","Type":"ContainerStarted","Data":"a870418b50a7db7ae3b2015f468d39dda1bc339ec716fdf19d5609e60088f5bc"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.863916 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" event={"ID":"51e0b974-fa59-4ecb-b9f4-1dfe9381c17d","Type":"ContainerStarted","Data":"98221fd917c601d7fa1022a47cce4a5b93b9efc160c9a2a1f143df694e1ad3ca"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.865579 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" event={"ID":"997ae328-13fc-41c0-9d10-fde36789b6c4","Type":"ContainerStarted","Data":"376591f7c24ab8341c0a7753f834e423cff82b90cbfcbf63359ef691a03380ba"} Jan 03 05:41:04 crc kubenswrapper[4854]: 
I0103 05:41:04.865681 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2q7dr" podStartSLOduration=6.865671289 podStartE2EDuration="6.865671289s" podCreationTimestamp="2026-01-03 05:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:04.839661135 +0000 UTC m=+43.166237707" watchObservedRunningTime="2026-01-03 05:41:04.865671289 +0000 UTC m=+43.192247861" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.866372 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" event={"ID":"154b4721-3c78-4469-946c-cdf5a68bd110","Type":"ContainerStarted","Data":"4eff9b02658cb75816fb2e19ee3779ddeac5a8d8a98c79090d93ea711b1d7403"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.892476 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.895024 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf"
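
The pod_startup_latency_tracker entries above can be sanity-checked by hand: podStartSLOduration is the gap from podCreationTimestamp to watchObservedRunningTime minus time spent pulling images, and since firstStartedPulling/lastFinishedPulling carry Go's zero time (0001-01-01, i.e. no pull was recorded), SLO and E2E durations come out identical. A minimal check in Python; the subtraction rule is inferred from the printed fields rather than taken from kubelet source:

    from datetime import datetime, timezone

    # Fields from the dns-default-2q7dr entry above (truncated to microseconds).
    created = datetime(2026, 1, 3, 5, 40, 58, tzinfo=timezone.utc)
    watch_running = datetime(2026, 1, 3, 5, 41, 4, 865671, tzinfo=timezone.utc)

    # Zero-time pull stamps mean no image pull happened, so nothing to subtract.
    slo_seconds = (watch_running - created).total_seconds()
    print(f"podStartSLOduration ~= {slo_seconds:.6f}s")  # 6.865671s, matching the log

Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.897144 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.397131544 +0000 UTC m=+43.723708116 (durationBeforeRetry 500ms).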
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.939122 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" event={"ID":"caf95282-b4d9-4814-bccc-e6c8c77658c5","Type":"ContainerStarted","Data":"cb330775a194cd8c084650ce0aa16375b244ca58b93cff04bdbb3036001a7db4"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.946710 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qv6qz" podStartSLOduration=19.946692958 podStartE2EDuration="19.946692958s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:04.944770368 +0000 UTC m=+43.271346940" watchObservedRunningTime="2026-01-03 05:41:04.946692958 +0000 UTC m=+43.273269530" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.947883 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" podStartSLOduration=18.947877228 podStartE2EDuration="18.947877228s" podCreationTimestamp="2026-01-03 05:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:04.895878611 +0000 UTC m=+43.222455213" watchObservedRunningTime="2026-01-03 05:41:04.947877228 +0000 UTC m=+43.274453800" Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.956186 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" event={"ID":"7f1b22aa-8d19-479a-8439-52a095edf970","Type":"ContainerStarted","Data":"1ae9983181968c101992b90019a7dc903fd36904fc1bc1bed4aa9a680d40aedd"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.956235 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" event={"ID":"7f1b22aa-8d19-479a-8439-52a095edf970","Type":"ContainerStarted","Data":"fa1e12d3884b9a5c57be90ac44aaf8f8a8e031154737fb017c43c2f241a2b081"} Jan 03 05:41:04 crc kubenswrapper[4854]: I0103 05:41:04.996932 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:04 crc kubenswrapper[4854]: E0103 05:41:04.997981 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.497965616 +0000 UTC m=+43.824542188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.006324 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" event={"ID":"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a","Type":"ContainerStarted","Data":"f09fae929a359ec61a961693c6febc726da9d2a9eaf09e04bc28c6d945301190"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.006367 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" event={"ID":"4fd0b8a0-fbaa-4655-b2dd-a15e0bd3f55a","Type":"ContainerStarted","Data":"05b60bcc4ea61461a2c440a09c775ff37995c0f6f968785dae5a005118ebbb55"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.012331 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" event={"ID":"ea2ee243-24c1-48aa-befb-ff2e4e839819","Type":"ContainerStarted","Data":"a4fcf1980f09ffb15a08c2ded215e8f8d56d700bdda8981fc935ea0c3a368b26"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.012378 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" event={"ID":"ea2ee243-24c1-48aa-befb-ff2e4e839819","Type":"ContainerStarted","Data":"ba3d9f3b0a6c7e1ecea27fdf57bfb06f357c757549456565475317b414317879"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.050054 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" event={"ID":"2259a421-a8dd-45e8-baa8-15cf1d37782e","Type":"ContainerStarted","Data":"a968852ba2040515d1c0c202c6ef1af18c49d1a9cc5ac54414430b5b1b296173"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.077098 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podStartSLOduration=20.077036915 podStartE2EDuration="20.077036915s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.046671128 +0000 UTC m=+43.373247700" watchObservedRunningTime="2026-01-03 05:41:05.077036915 +0000 UTC m=+43.403613487" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.078812 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-58vx9" podStartSLOduration=20.078804071 podStartE2EDuration="20.078804071s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.075223498 +0000 UTC m=+43.401800060" watchObservedRunningTime="2026-01-03 05:41:05.078804071 +0000 UTC m=+43.405380643"
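
Every UnmountVolume.TearDown and MountVolume.MountDevice failure in this stretch has the same root cause: the kubelet's list of registered CSI plugins does not (yet) contain kubevirt.io.hostpath-provisioner, so no CSI client can be built for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, and nestedpendingoperations keeps rescheduling the per-volume operation with a 500ms durationBeforeRetry. Per-node driver registrations are mirrored in the CSINode API object, so one way to watch for the driver coming back is a sketch like the following, using the kubernetes Python client and assuming a reachable kubeconfig:

    from kubernetes import client, config

    DRIVER = "kubevirt.io.hostpath-provisioner"

    # CSINode.spec.drivers reflects the kubelet's registered CSI drivers per node.
    config.load_kube_config()
    for csinode in client.StorageV1Api().list_csi_node().items:
        names = [d.name for d in (csinode.spec.drivers or [])]
        state = "registered" if DRIVER in names else "missing"
        print(f"{csinode.metadata.name}: {DRIVER} {state} (drivers: {names})")

Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.085175 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod"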
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" event={"ID":"07007d77-4861-45ac-aacd-17b840bef2ee","Type":"ContainerStarted","Data":"e358653d3c0264c2563deb30d5d7bac78dd6b3c6928d4ced8530a840bd9b67a4"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.085917 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.101210 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.102244 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.602233308 +0000 UTC m=+43.928809880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.105388 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zvkh2" podStartSLOduration=20.105360079 podStartE2EDuration="20.105360079s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.100556964 +0000 UTC m=+43.427133536" watchObservedRunningTime="2026-01-03 05:41:05.105360079 +0000 UTC m=+43.431936641" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.107108 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qr6h4" event={"ID":"2e68edf2-12f6-4758-aed1-2d72186bc7de","Type":"ContainerStarted","Data":"71542860709fb48769735a4327e71e3c1233f479abec93a4830ad6e0632ecbd5"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.109594 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" event={"ID":"5f526fe3-5134-42d0-a52f-e3c821137ef0","Type":"ContainerStarted","Data":"bb73817f888179fbe1c5dc949ec4d02b821c75e3b79096d4271e67061b3c9ded"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.112588 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" event={"ID":"c3a93788-c0a0-412d-aabd-1e93727e72f0","Type":"ContainerStarted","Data":"29a6708852980da0aae8cb879ad2592c9f9c409559d99e13b161616826b2ebb3"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.140032 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xrcwd" 
podStartSLOduration=20.140011266 podStartE2EDuration="20.140011266s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.138047036 +0000 UTC m=+43.464623608" watchObservedRunningTime="2026-01-03 05:41:05.140011266 +0000 UTC m=+43.466587838" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.147276 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" event={"ID":"58b842c2-f723-45ae-9d08-9218837bb66a","Type":"ContainerStarted","Data":"7a2d320f9d34c6e37138106368a4f8d6b97fc5aa2c245bc9d703044c699520e3"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.148117 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.160786 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvh5b" podStartSLOduration=20.160769274 podStartE2EDuration="20.160769274s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.158194698 +0000 UTC m=+43.484771260" watchObservedRunningTime="2026-01-03 05:41:05.160769274 +0000 UTC m=+43.487345846" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.183574 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.183806 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" event={"ID":"5ab7ee8b-9182-43e2-85de-f8d92aa12587","Type":"ContainerStarted","Data":"c021757414cec9b59fdae5efd408a03412abeb6822003d9b893eb05bdeb3f029"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.184829 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.206049 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.206117 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.207090 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b9llh" podStartSLOduration=20.207059784 podStartE2EDuration="20.207059784s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.206547 +0000 UTC 
m=+43.533123562" watchObservedRunningTime="2026-01-03 05:41:05.207059784 +0000 UTC m=+43.533636356" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.207153 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.207378 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.707344561 +0000 UTC m=+44.033921133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.207609 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.208038 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.708022209 +0000 UTC m=+44.034598781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.241002 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" event={"ID":"e05124c8-4705-4d57-82ec-b1ae0658e98e","Type":"ContainerStarted","Data":"9c4c6e3ff46bdf1188cc1623b925e2f60732f62527b95f4ec3ccd9fe62a148d1"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.265307 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" event={"ID":"c378c365-855f-480d-9089-f6abd1b6a743","Type":"ContainerStarted","Data":"0b87ee35109868408720472c2b717c9b24e5af29c1e9c7c2d21c35eac911e4d1"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.296636 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" event={"ID":"5b7f5d78-25a5-497a-9315-494fe26edb93","Type":"ContainerStarted","Data":"10cde2faee74631a8c6185f6e956d7af2bdb78e3cb320f987bb30ac1860b9571"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.297495 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.318658 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.318793 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.818773618 +0000 UTC m=+44.145350190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.319048 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.319483 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.819466336 +0000 UTC m=+44.146042898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.334012 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2jw8v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.334052 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.335353 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2brf7" podStartSLOduration=20.335335537 podStartE2EDuration="20.335335537s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.334636379 +0000 UTC m=+43.661212941" watchObservedRunningTime="2026-01-03 05:41:05.335335537 +0000 UTC m=+43.661912109" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.336165 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" event={"ID":"fc73bdbb-e111-427a-b2d6-95976be94058","Type":"ContainerStarted","Data":"650032f9d47d20412da9bc73b65e2fa48143afdc13442d41a4498236241f6256"}
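
The marketplace-operator probe failure above, like the packageserver one before it, is a plain "connection refused" moments after ContainerStarted: the process has not bound its socket yet, and the kubelet simply reprobes until the status flips to "ready", as it does for several pods further down. Functionally an HTTP readiness probe reduces to a GET with a short timeout where any status below 400 passes; an illustrative reimplementation, not the kubelet's actual prober code:

    import urllib.request, urllib.error

    def http_probe(url: str, timeout: float = 1.0) -> tuple[bool, str]:
        """Roughly what an HTTP readiness probe does: any 2xx/3xx passes."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400, f"HTTP {resp.status}"
        except (urllib.error.URLError, OSError) as exc:
            return False, str(exc)  # e.g. "... Connection refused", as in the log

    print(http_probe("http://10.217.0.33:8080/healthz"))

Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.342901 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod"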
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" event={"ID":"77084a3a-5610-4014-a3bf-6d4073a74d44","Type":"ContainerStarted","Data":"0da95cae6db85b0c5d3e13e5ed80e896d17465b69e5fcda34ad41246e340f5d0"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.343724 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.343789 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 05:41:05 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld Jan 03 05:41:05 crc kubenswrapper[4854]: [+]process-running ok Jan 03 05:41:05 crc kubenswrapper[4854]: healthz check failed Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.343829 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.368190 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh5ck" podStartSLOduration=20.368176328 podStartE2EDuration="20.368176328s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.363483876 +0000 UTC m=+43.690060448" watchObservedRunningTime="2026-01-03 05:41:05.368176328 +0000 UTC m=+43.694752900" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.380369 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6wgwf" event={"ID":"c9a35ce4-6254-4744-b9a8-966399ae89cc","Type":"ContainerStarted","Data":"605ee64ae8dc04f56355d7e3cd5885cee13b8a1b13dbc8e60690a0bf4be57488"} Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.382333 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.382580 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.383757 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.392062 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.394776 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-64gkx"] Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 
05:41:05.395787 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.401473 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.401727 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.402368 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-phv75" podStartSLOduration=19.402352794 podStartE2EDuration="19.402352794s" podCreationTimestamp="2026-01-03 05:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.401623085 +0000 UTC m=+43.728199647" watchObservedRunningTime="2026-01-03 05:41:05.402352794 +0000 UTC m=+43.728929356" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.419808 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.420933 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:05.920917485 +0000 UTC m=+44.247494057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.451672 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-64gkx"] Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.487985 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" podStartSLOduration=20.487967362 podStartE2EDuration="20.487967362s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.486925415 +0000 UTC m=+43.813502007" watchObservedRunningTime="2026-01-03 05:41:05.487967362 +0000 UTC m=+43.814543924" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.490545 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qr6h4" podStartSLOduration=8.490536398 podStartE2EDuration="8.490536398s" podCreationTimestamp="2026-01-03 05:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.456439135 +0000 UTC m=+43.783015707" watchObservedRunningTime="2026-01-03 05:41:05.490536398 +0000 UTC m=+43.817112970" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.520893 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.520951 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmpx\" (UniqueName: \"kubernetes.io/projected/80855e9f-3a0c-439c-87cf-933b8825c398-kube-api-access-jtmpx\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.521025 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-utilities\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.521170 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-catalog-content\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx"
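
The VerifyControllerAttachedVolume lines for certified-operators-64gkx show the usual volume shape of an OLM catalog pod: two EmptyDirs (utilities, catalog-content) plus a projected service-account token (kube-api-access-jtmpx). Expressed with the kubernetes Python client models, the equivalent volume list would look roughly as follows; this is illustrative only, since the real pod spec is created by the marketplace operator:

    from kubernetes import client

    # Volume names as they appear in the log; the token projection details
    # are an assumption about what a kube-api-access-* volume contains.
    volumes = [
        client.V1Volume(name="utilities",
                        empty_dir=client.V1EmptyDirVolumeSource()),
        client.V1Volume(name="catalog-content",
                        empty_dir=client.V1EmptyDirVolumeSource()),
        client.V1Volume(name="kube-api-access-jtmpx",
                        projected=client.V1ProjectedVolumeSource(sources=[
                            client.V1VolumeProjection(
                                service_account_token=client.V1ServiceAccountTokenProjection(
                                    path="token"))])),
    ]

Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.521780 4854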
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.021768368 +0000 UTC m=+44.348344940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.569671 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podStartSLOduration=20.569655558 podStartE2EDuration="20.569655558s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.518525594 +0000 UTC m=+43.845102166" watchObservedRunningTime="2026-01-03 05:41:05.569655558 +0000 UTC m=+43.896232130" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.586696 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7mpr" podStartSLOduration=20.586669859 podStartE2EDuration="20.586669859s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.569382361 +0000 UTC m=+43.895958933" watchObservedRunningTime="2026-01-03 05:41:05.586669859 +0000 UTC m=+43.913246431" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.607466 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" podStartSLOduration=20.607441767 podStartE2EDuration="20.607441767s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.602506949 +0000 UTC m=+43.929083521" watchObservedRunningTime="2026-01-03 05:41:05.607441767 +0000 UTC m=+43.934018349" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.609128 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bqxfg"] Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.610252 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqxfg"] Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.610347 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.613264 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.625173 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.625630 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-utilities\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.625757 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-catalog-content\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.625909 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtmpx\" (UniqueName: \"kubernetes.io/projected/80855e9f-3a0c-439c-87cf-933b8825c398-kube-api-access-jtmpx\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.626785 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-utilities\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.626864 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-catalog-content\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.627001 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.126981434 +0000 UTC m=+44.453558006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.669552 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtmpx\" (UniqueName: \"kubernetes.io/projected/80855e9f-3a0c-439c-87cf-933b8825c398-kube-api-access-jtmpx\") pod \"certified-operators-64gkx\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.678863 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgczz" podStartSLOduration=20.678844067 podStartE2EDuration="20.678844067s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.678658482 +0000 UTC m=+44.005235074" watchObservedRunningTime="2026-01-03 05:41:05.678844067 +0000 UTC m=+44.005420639" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.728487 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gshw8\" (UniqueName: \"kubernetes.io/projected/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-kube-api-access-gshw8\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.728606 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-catalog-content\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.728713 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.728762 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-utilities\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.729173 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.229156221 +0000 UTC m=+44.555733003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.759300 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.761991 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" podStartSLOduration=20.761965601 podStartE2EDuration="20.761965601s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.726601385 +0000 UTC m=+44.053177977" watchObservedRunningTime="2026-01-03 05:41:05.761965601 +0000 UTC m=+44.088542173" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.791760 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" podStartSLOduration=20.791734982 podStartE2EDuration="20.791734982s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.758683256 +0000 UTC m=+44.085259848" watchObservedRunningTime="2026-01-03 05:41:05.791734982 +0000 UTC m=+44.118311554" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.799593 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dsx2k"] Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.800983 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.813143 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsx2k"] Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.830900 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.831260 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-catalog-content\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.831319 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-utilities\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.831382 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gshw8\" (UniqueName: \"kubernetes.io/projected/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-kube-api-access-gshw8\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.831675 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.331660347 +0000 UTC m=+44.658236919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.832010 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-catalog-content\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.832099 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr2w7" podStartSLOduration=20.832069407 podStartE2EDuration="20.832069407s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.825828855 +0000 UTC m=+44.152405427" watchObservedRunningTime="2026-01-03 05:41:05.832069407 +0000 UTC m=+44.158645979" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.832354 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-utilities\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.871360 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gshw8\" (UniqueName: \"kubernetes.io/projected/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-kube-api-access-gshw8\") pod \"community-operators-bqxfg\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.871940 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podStartSLOduration=20.8719258 podStartE2EDuration="20.8719258s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:05.869009734 +0000 UTC m=+44.195586306" watchObservedRunningTime="2026-01-03 05:41:05.8719258 +0000 UTC m=+44.198502362" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.928462 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.934275 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-utilities\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.934323 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.934392 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-catalog-content\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.934429 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvbg\" (UniqueName: \"kubernetes.io/projected/7eefb384-ef8d-4c37-9287-20114d60743d-kube-api-access-4lvbg\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:05 crc kubenswrapper[4854]: E0103 05:41:05.934818 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.434798139 +0000 UTC m=+44.761374711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.981331 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mfbxz"] Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.982228 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:05 crc kubenswrapper[4854]: I0103 05:41:05.994596 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mfbxz"] Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.035989 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.036630 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.536599356 +0000 UTC m=+44.863175928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.036940 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qpt\" (UniqueName: \"kubernetes.io/projected/54056ea8-c177-4995-8261-209eb3200f5f-kube-api-access-57qpt\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.037020 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-utilities\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.037051 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.037125 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-catalog-content\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.037185 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-catalog-content\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " 
pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.037213 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lvbg\" (UniqueName: \"kubernetes.io/projected/7eefb384-ef8d-4c37-9287-20114d60743d-kube-api-access-4lvbg\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.037253 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-utilities\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.037933 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-utilities\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.038201 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-catalog-content\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.039317 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.539298866 +0000 UTC m=+44.865875438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.062603 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lvbg\" (UniqueName: \"kubernetes.io/projected/7eefb384-ef8d-4c37-9287-20114d60743d-kube-api-access-4lvbg\") pod \"certified-operators-dsx2k\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.138328 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.138713 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57qpt\" (UniqueName: \"kubernetes.io/projected/54056ea8-c177-4995-8261-209eb3200f5f-kube-api-access-57qpt\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.138849 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-catalog-content\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.138926 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-utilities\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.139775 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.140428 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.640383435 +0000 UTC m=+44.966960007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.162166 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqxfg"] Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.242529 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.243002 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.742986484 +0000 UTC m=+45.069563056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.246731 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-64gkx"] Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.248244 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-utilities\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.267311 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-catalog-content\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.275803 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57qpt\" (UniqueName: \"kubernetes.io/projected/54056ea8-c177-4995-8261-209eb3200f5f-kube-api-access-57qpt\") pod \"community-operators-mfbxz\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.310920 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.341714 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 05:41:06 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld Jan 03 05:41:06 crc kubenswrapper[4854]: [+]process-running ok Jan 03 05:41:06 crc kubenswrapper[4854]: healthz check failed Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.342100 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.343835 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.344007 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.84398392 +0000 UTC m=+45.170560502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.344264 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.344567 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.844557215 +0000 UTC m=+45.171133787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.445569 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.445868 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:06.94585316 +0000 UTC m=+45.272429732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.450995 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" event={"ID":"9ecb343a-f88c-49d3-a792-696f8b94eca3","Type":"ContainerStarted","Data":"21f27f09d6dbc1e7c9b44ed26845c77c7130232e16ad10ca00346ecd3f3f82a6"} Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.469850 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6wgwf" event={"ID":"c9a35ce4-6254-4744-b9a8-966399ae89cc","Type":"ContainerStarted","Data":"db8f3263a5cb074ee78a1b7df269bec764a5715dc054ca06861fe36a2f5e2042"} Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.477406 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" event={"ID":"2259a421-a8dd-45e8-baa8-15cf1d37782e","Type":"ContainerStarted","Data":"6dad453159b3b21cdf0717b8e8e4a06227f488df4f328009cf685aa117a4d10e"} Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.479579 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" event={"ID":"07007d77-4861-45ac-aacd-17b840bef2ee","Type":"ContainerStarted","Data":"cd5d4db41f7b67e9b596b2078363907a0118e5e595d471db2365026bf43e6851"} Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.483280 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64gkx" event={"ID":"80855e9f-3a0c-439c-87cf-933b8825c398","Type":"ContainerStarted","Data":"d28c158b59cc86c2e867be449bab788745fbdae228a2129c303acc502bd7f9dd"} Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.500145 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" 
event={"ID":"997ae328-13fc-41c0-9d10-fde36789b6c4","Type":"ContainerStarted","Data":"2897deffedcc40a67a98bd06ecea312d9d5a16b02fb79f580bede598b879f19c"} Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.507388 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsx2k"] Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.509505 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqxfg" event={"ID":"6127414f-e3e3-4c52-81a8-f6fea70b7d0c","Type":"ContainerStarted","Data":"292e5e8a3392ac1fdc0b9b4546a40aad83a6f2d190d0d5215204fb53bca1581f"} Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.519341 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.519392 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.521688 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-59h6x" podStartSLOduration=21.521672454 podStartE2EDuration="21.521672454s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:06.519895248 +0000 UTC m=+44.846471820" watchObservedRunningTime="2026-01-03 05:41:06.521672454 +0000 UTC m=+44.848249026" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.522739 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2jw8v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.522800 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.548212 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.557312 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.057296887 +0000 UTC m=+45.383873459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.571315 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.650504 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.652609 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.152594356 +0000 UTC m=+45.479170928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.756171 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.756784 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.256772476 +0000 UTC m=+45.583349048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.862550 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.863446 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.363416038 +0000 UTC m=+45.689992610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:06 crc kubenswrapper[4854]: I0103 05:41:06.970938 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:06 crc kubenswrapper[4854]: E0103 05:41:06.971408 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.471377146 +0000 UTC m=+45.797953718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.071559 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.071977 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.571961492 +0000 UTC m=+45.898538064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.148813 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mfbxz"] Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.173116 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.173958 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.673936634 +0000 UTC m=+46.000513206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: W0103 05:41:07.178839 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54056ea8_c177_4995_8261_209eb3200f5f.slice/crio-39ad847905265176cc0e5e459ad99dd3086d02c764b303959bc23f43071c1b5f WatchSource:0}: Error finding container 39ad847905265176cc0e5e459ad99dd3086d02c764b303959bc23f43071c1b5f: Status 404 returned error can't find the container with id 39ad847905265176cc0e5e459ad99dd3086d02c764b303959bc23f43071c1b5f Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.222795 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.274164 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.274689 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.774672424 +0000 UTC m=+46.101248996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.337998 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 05:41:07 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld Jan 03 05:41:07 crc kubenswrapper[4854]: [+]process-running ok Jan 03 05:41:07 crc kubenswrapper[4854]: healthz check failed Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.338401 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.376545 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.376934 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.876921383 +0000 UTC m=+46.203497955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.399548 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9d9fw"] Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.477863 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.478299 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.978266049 +0000 UTC m=+46.304842621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.478808 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.493365 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:07.993305569 +0000 UTC m=+46.319882141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.571792 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerStarted","Data":"d4c7481a523430e583ccdbfb470a096944336fea448c60739e716d1ba7853ca9"} Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.583134 4854 generic.go:334] "Generic (PLEG): container finished" podID="7eefb384-ef8d-4c37-9287-20114d60743d" containerID="533aa8162348383e5432b6d8e1683685373f199c3205a906b6042d464da94f97" exitCode=0 Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.583223 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsx2k" event={"ID":"7eefb384-ef8d-4c37-9287-20114d60743d","Type":"ContainerDied","Data":"533aa8162348383e5432b6d8e1683685373f199c3205a906b6042d464da94f97"} Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.583258 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsx2k" event={"ID":"7eefb384-ef8d-4c37-9287-20114d60743d","Type":"ContainerStarted","Data":"26049466ec30dc63065403ce278018f93b77d1f52acd1548d1605be3414a49e6"} Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.589204 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.589576 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c8dxw"] Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.590506 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.593240 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.603282 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.604269 4854 generic.go:334] "Generic (PLEG): container finished" podID="36a86a6b-3a2c-4994-af93-2b4ae754edfa" containerID="0d2642e00dc964fc0711de5e89b1f8f3eb4f9c0a908d3d734cf3e533f0ab34e1" exitCode=0 Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.604359 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" event={"ID":"36a86a6b-3a2c-4994-af93-2b4ae754edfa","Type":"ContainerDied","Data":"0d2642e00dc964fc0711de5e89b1f8f3eb4f9c0a908d3d734cf3e533f0ab34e1"} Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.603753 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.10372763 +0000 UTC m=+46.430304202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.604984 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.605437 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.105427924 +0000 UTC m=+46.432004496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.627582 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8dxw"] Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.629475 4854 generic.go:334] "Generic (PLEG): container finished" podID="80855e9f-3a0c-439c-87cf-933b8825c398" containerID="ea8e5712cadd80a475b2f05b210acffb5e24206ae558a8139fe633a1c7bf8f0b" exitCode=0 Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.629540 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64gkx" event={"ID":"80855e9f-3a0c-439c-87cf-933b8825c398","Type":"ContainerDied","Data":"ea8e5712cadd80a475b2f05b210acffb5e24206ae558a8139fe633a1c7bf8f0b"} Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.664411 4854 generic.go:334] "Generic (PLEG): container finished" podID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerID="56a849392b60c950d251d6f57a5fc8a99f1af50bc3d3301a78065d9e3b1a5e1b" exitCode=0 Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.664480 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqxfg" event={"ID":"6127414f-e3e3-4c52-81a8-f6fea70b7d0c","Type":"ContainerDied","Data":"56a849392b60c950d251d6f57a5fc8a99f1af50bc3d3301a78065d9e3b1a5e1b"} Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.684507 4854 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.703384 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6wgwf" event={"ID":"c9a35ce4-6254-4744-b9a8-966399ae89cc","Type":"ContainerStarted","Data":"bd1fbfb43110f3dda4dbbdc0b969d890ea3fb38348792487e43770cbc4f6149c"} Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.705886 4854 generic.go:334] "Generic (PLEG): container finished" podID="54056ea8-c177-4995-8261-209eb3200f5f" containerID="02aa45d7932041c095b0d160f4283abc73127fc0b55b44cbca74b6b43b39a74f" exitCode=0 Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.707231 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mfbxz" event={"ID":"54056ea8-c177-4995-8261-209eb3200f5f","Type":"ContainerDied","Data":"02aa45d7932041c095b0d160f4283abc73127fc0b55b44cbca74b6b43b39a74f"} Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.707363 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mfbxz" event={"ID":"54056ea8-c177-4995-8261-209eb3200f5f","Type":"ContainerStarted","Data":"39ad847905265176cc0e5e459ad99dd3086d02c764b303959bc23f43071c1b5f"} Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.708068 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.708535 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-utilities\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.708691 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9kcs\" (UniqueName: \"kubernetes.io/projected/be6f79dd-0ea7-442e-ab1e-e35b15d45721-kube-api-access-c9kcs\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.708774 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-catalog-content\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.708855 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2jw8v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.708922 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.710668 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.210649929 +0000 UTC m=+46.537226501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.749543 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.804158 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6wgwf" podStartSLOduration=22.804127121 podStartE2EDuration="22.804127121s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:07.749899526 +0000 UTC m=+46.076476098" watchObservedRunningTime="2026-01-03 05:41:07.804127121 +0000 UTC m=+46.130703683" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.812835 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.813122 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9kcs\" (UniqueName: \"kubernetes.io/projected/be6f79dd-0ea7-442e-ab1e-e35b15d45721-kube-api-access-c9kcs\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.813183 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-catalog-content\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.813496 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-utilities\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.813779 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.313761501 +0000 UTC m=+46.640338073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.814566 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-utilities\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.814824 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-catalog-content\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.845916 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9kcs\" (UniqueName: \"kubernetes.io/projected/be6f79dd-0ea7-442e-ab1e-e35b15d45721-kube-api-access-c9kcs\") pod \"redhat-marketplace-c8dxw\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.916457 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:41:07 crc kubenswrapper[4854]: I0103 05:41:07.916591 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:07 crc kubenswrapper[4854]: E0103 05:41:07.916914 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.416899753 +0000 UTC m=+46.743476325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.002122 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cj878"] Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.002990 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cj878" Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.004386 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cj878"] Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.018237 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:08 crc kubenswrapper[4854]: E0103 05:41:08.020967 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.520954469 +0000 UTC m=+46.847531041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.119557 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 05:41:08 crc kubenswrapper[4854]: E0103 05:41:08.119766 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.619734669 +0000 UTC m=+46.946311241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.120535 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-utilities\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.120672 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-catalog-content\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.120692 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82x46\" (UniqueName: \"kubernetes.io/projected/dd260432-f4bc-4c81-a5e1-e3205534cda8-kube-api-access-82x46\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.120946 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf"
Jan 03 05:41:08 crc kubenswrapper[4854]: E0103 05:41:08.121322 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.621292859 +0000 UTC m=+46.947869431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.184968 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8dxw"]
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.213337 4854 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-03T05:41:07.684535453Z","Handler":null,"Name":""}
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.222586 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 03 05:41:08 crc kubenswrapper[4854]: E0103 05:41:08.222807 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.722771259 +0000 UTC m=+47.049347831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.222991 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-catalog-content\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.223041 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82x46\" (UniqueName: \"kubernetes.io/projected/dd260432-f4bc-4c81-a5e1-e3205534cda8-kube-api-access-82x46\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.223441 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.223576 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-utilities\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: E0103 05:41:08.223880 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 05:41:08.723861057 +0000 UTC m=+47.050437629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wc7xf" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.224065 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-catalog-content\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.224269 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-utilities\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.235674 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.235782 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.246348 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82x46\" (UniqueName: \"kubernetes.io/projected/dd260432-f4bc-4c81-a5e1-e3205534cda8-kube-api-access-82x46\") pod \"redhat-marketplace-cj878\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.247315 4854 patch_prober.go:28] interesting pod/apiserver-76f77b778f-pzlj8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]log ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]etcd ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/max-in-flight-filter ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 03 05:41:08 crc kubenswrapper[4854]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/openshift.io-startinformers ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 03 05:41:08 crc kubenswrapper[4854]: livez check failed
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.247435 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" podUID="2259a421-a8dd-45e8-baa8-15cf1d37782e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.259641 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zhzlw"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.260774 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zhzlw"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.268088 4854 patch_prober.go:28] interesting pod/console-f9d7485db-zhzlw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.268130 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zhzlw" podUID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.320501 4854 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.320556 4854 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.324543 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.342822 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.343270 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cj878"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.344513 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 05:41:08 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld
Jan 03 05:41:08 crc kubenswrapper[4854]: [+]process-running ok
Jan 03 05:41:08 crc kubenswrapper[4854]: healthz check failed
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.344593 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.426485 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.435461 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.435502 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.499630 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wc7xf\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.587212 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f2b22"]
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.588194 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2b22"
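The CSI errors above are a startup ordering race, and the log shows it resolving within one retry window: Unmounter.TearDownAt and attacher.MountDevice fail with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" only while the hostpath plugin is still registering over /var/lib/kubelet/plugins_registry. RegisterPlugin starts at 05:41:08.213, csi_plugin.go validates and registers the driver at 05:41:08.320, and the next 500ms retry succeeds (MountDevice at .435, SetUp at .499). Below is a minimal sketch of that retry-until-registered pattern; all identifiers in it (driverRegistry, errDriverNotFound) are hypothetical stand-ins, not kubelet's real types.

```go
// Illustrative sketch only: a volume operation retried until the named CSI
// plugin appears in the registry. The real logic lives in kubelet's
// csi_plugin.go / nestedpendingoperations.go; names here are invented.
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var errDriverNotFound = errors.New("driver name not found in the list of registered CSI drivers")

// driverRegistry stands in for kubelet's plugin registry, which is fed by
// the plugins_registry socket watcher (the RegisterPlugin record above).
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

func (r *driverRegistry) lookup(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return errDriverNotFound
	}
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]struct{}{}}

	// The plugin registers asynchronously, as csi-hostpathplugin-qmphn does.
	go func() {
		time.Sleep(700 * time.Millisecond)
		reg.register("kubevirt.io.hostpath-provisioner")
	}()

	// Mount keeps failing until the driver appears, matching the
	// "No retries permitted until ..." records with durationBeforeRetry 500ms.
	for {
		if err := reg.lookup("kubevirt.io.hostpath-provisioner"); err != nil {
			fmt.Println("MountVolume failed:", err, "- retrying in 500ms")
			time.Sleep(500 * time.Millisecond)
			continue
		}
		fmt.Println("MountVolume.MountDevice succeeded")
		return
	}
}
```

The fixed 500ms in the sketch mirrors the initial durationBeforeRetry in the nestedpendingoperations records; kubelet's actual backoff grows on repeated failures rather than staying constant.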
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.591052 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.599726 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2b22"]
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.645197 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cj878"]
Jan 03 05:41:08 crc kubenswrapper[4854]: W0103 05:41:08.675805 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd260432_f4bc_4c81_a5e1_e3205534cda8.slice/crio-34c09feab4f8d7b19a4e0db7da17b6b2e00008df846238b2e01ae7f51a5e02c6 WatchSource:0}: Error finding container 34c09feab4f8d7b19a4e0db7da17b6b2e00008df846238b2e01ae7f51a5e02c6: Status 404 returned error can't find the container with id 34c09feab4f8d7b19a4e0db7da17b6b2e00008df846238b2e01ae7f51a5e02c6
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.715569 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerStarted","Data":"bc39fe0ad3c89ce1ba2a6b062e69481645bb4f21f0b1d7997562058d716115ed"}
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.715657 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerStarted","Data":"3547d3f51c37ff3e8c7a8aa297fa6555764ab533f62946126c427d81d9fe134b"}
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.726813 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.727282 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj878" event={"ID":"dd260432-f4bc-4c81-a5e1-e3205534cda8","Type":"ContainerStarted","Data":"34c09feab4f8d7b19a4e0db7da17b6b2e00008df846238b2e01ae7f51a5e02c6"}
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.729622 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8dxw" event={"ID":"be6f79dd-0ea7-442e-ab1e-e35b15d45721","Type":"ContainerStarted","Data":"6e85964ce18a1663b8c86086eb0ea005f34d83633036e6ee5c67aa5f0cdea28c"}
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.729668 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8dxw" event={"ID":"be6f79dd-0ea7-442e-ab1e-e35b15d45721","Type":"ContainerStarted","Data":"714fcb62e30045c4d93426893a2a510ee20c068a585a9edf26cc6be8db3fb41e"}
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.729680 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" gracePeriod=30
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.732719 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-catalog-content\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.732801 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-utilities\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.732844 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz766\" (UniqueName: \"kubernetes.io/projected/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-kube-api-access-kz766\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.735512 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" podStartSLOduration=11.735499763 podStartE2EDuration="11.735499763s" podCreationTimestamp="2026-01-03 05:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:08.734275841 +0000 UTC m=+47.060852423" watchObservedRunningTime="2026-01-03 05:41:08.735499763 +0000 UTC m=+47.062076325"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.833985 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz766\" (UniqueName: \"kubernetes.io/projected/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-kube-api-access-kz766\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
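The podStartSLOduration in the latency-tracker record above is plain timestamp arithmetic: watchObservedRunningTime minus podCreationTimestamp, i.e. 05:41:08.735499763 − 05:40:57 = 11.735499763s (the pulling timestamps are the zero value because no image pull was needed). A quick check of that subtraction:

```go
// Reproduces the podStartSLOduration arithmetic from the record above:
// watchObservedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-03T05:40:57Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-03T05:41:08.735499763Z")
	fmt.Printf("podStartSLOduration = %.9fs\n", running.Sub(created).Seconds()) // 11.735499763s
}
```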
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.834127 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-catalog-content\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.834342 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-utilities\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.836318 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-catalog-content\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.836380 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-utilities\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.869875 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.871985 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.878644 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz766\" (UniqueName: \"kubernetes.io/projected/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-kube-api-access-kz766\") pod \"redhat-operators-f2b22\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.879419 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.882521 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.882924 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 03 05:41:08 crc kubenswrapper[4854]: I0103 05:41:08.917962 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2b22"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.001392 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ztvzs"]
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.007721 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.025118 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztvzs"]
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.042940 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cba05caa-c55b-409f-a0da-d1a064def5b0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cba05caa-c55b-409f-a0da-d1a064def5b0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.044262 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cba05caa-c55b-409f-a0da-d1a064def5b0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cba05caa-c55b-409f-a0da-d1a064def5b0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.145335 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgv75\" (UniqueName: \"kubernetes.io/projected/28406837-5e09-49b4-8583-54a450f07ae4-kube-api-access-hgv75\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.145400 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-catalog-content\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.145449 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-utilities\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.145476 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cba05caa-c55b-409f-a0da-d1a064def5b0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cba05caa-c55b-409f-a0da-d1a064def5b0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.145504 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cba05caa-c55b-409f-a0da-d1a064def5b0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cba05caa-c55b-409f-a0da-d1a064def5b0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.145610 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cba05caa-c55b-409f-a0da-d1a064def5b0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cba05caa-c55b-409f-a0da-d1a064def5b0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.159954 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.164934 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cba05caa-c55b-409f-a0da-d1a064def5b0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cba05caa-c55b-409f-a0da-d1a064def5b0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.215755 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.246528 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-catalog-content\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.246598 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-utilities\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.246656 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgv75\" (UniqueName: \"kubernetes.io/projected/28406837-5e09-49b4-8583-54a450f07ae4-kube-api-access-hgv75\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.247674 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-catalog-content\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.247792 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-utilities\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.254980 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2b22"]
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.263345 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgv75\" (UniqueName: \"kubernetes.io/projected/28406837-5e09-49b4-8583-54a450f07ae4-kube-api-access-hgv75\") pod \"redhat-operators-ztvzs\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.339054 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 05:41:09 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld
Jan 03 05:41:09 crc kubenswrapper[4854]: [+]process-running ok
Jan 03 05:41:09 crc kubenswrapper[4854]: healthz check failed
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.339141 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.347521 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36a86a6b-3a2c-4994-af93-2b4ae754edfa-config-volume\") pod \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") "
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.347617 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36a86a6b-3a2c-4994-af93-2b4ae754edfa-secret-volume\") pod \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") "
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.347675 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqtkh\" (UniqueName: \"kubernetes.io/projected/36a86a6b-3a2c-4994-af93-2b4ae754edfa-kube-api-access-vqtkh\") pod \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\" (UID: \"36a86a6b-3a2c-4994-af93-2b4ae754edfa\") "
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.355752 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a86a6b-3a2c-4994-af93-2b4ae754edfa-config-volume" (OuterVolumeSpecName: "config-volume") pod "36a86a6b-3a2c-4994-af93-2b4ae754edfa" (UID: "36a86a6b-3a2c-4994-af93-2b4ae754edfa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.360714 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a86a6b-3a2c-4994-af93-2b4ae754edfa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "36a86a6b-3a2c-4994-af93-2b4ae754edfa" (UID: "36a86a6b-3a2c-4994-af93-2b4ae754edfa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.360740 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a86a6b-3a2c-4994-af93-2b4ae754edfa-kube-api-access-vqtkh" (OuterVolumeSpecName: "kube-api-access-vqtkh") pod "36a86a6b-3a2c-4994-af93-2b4ae754edfa" (UID: "36a86a6b-3a2c-4994-af93-2b4ae754edfa"). InnerVolumeSpecName "kube-api-access-vqtkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.427141 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.446394 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wc7xf"]
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.449566 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqtkh\" (UniqueName: \"kubernetes.io/projected/36a86a6b-3a2c-4994-af93-2b4ae754edfa-kube-api-access-vqtkh\") on node \"crc\" DevicePath \"\""
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.449608 4854 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36a86a6b-3a2c-4994-af93-2b4ae754edfa-config-volume\") on node \"crc\" DevicePath \"\""
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.449618 4854 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36a86a6b-3a2c-4994-af93-2b4ae754edfa-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.478064 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztvzs"
Jan 03 05:41:09 crc kubenswrapper[4854]: W0103 05:41:09.498577 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea3251f8_9e38_4094_86f1_98187e5b2c75.slice/crio-bc8bcb436e40f9b244ffa717821590a37f65f804f0ec9c69c9a9fcb2fe572167 WatchSource:0}: Error finding container bc8bcb436e40f9b244ffa717821590a37f65f804f0ec9c69c9a9fcb2fe572167: Status 404 returned error can't find the container with id bc8bcb436e40f9b244ffa717821590a37f65f804f0ec9c69c9a9fcb2fe572167
Jan 03 05:41:09 crc kubenswrapper[4854]: W0103 05:41:09.500019 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcba05caa_c55b_409f_a0da_d1a064def5b0.slice/crio-abf3f15f156f71a23b4060b290702a5833ef037e09f9d7d0979202b3e2762785 WatchSource:0}: Error finding container abf3f15f156f71a23b4060b290702a5833ef037e09f9d7d0979202b3e2762785: Status 404 returned error can't find the container with id abf3f15f156f71a23b4060b290702a5833ef037e09f9d7d0979202b3e2762785
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.741792 4854 generic.go:334] "Generic (PLEG): container finished" podID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerID="fe03d90b389b8feb1e2d2b0401e8de71976317947ab776a2536d1d01888eedc4" exitCode=0
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.742069 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2b22" event={"ID":"332dcfb7-8bcf-46bf-9168-4bdb4411e55e","Type":"ContainerDied","Data":"fe03d90b389b8feb1e2d2b0401e8de71976317947ab776a2536d1d01888eedc4"}
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.742417 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2b22" event={"ID":"332dcfb7-8bcf-46bf-9168-4bdb4411e55e","Type":"ContainerStarted","Data":"f7c4e2e159f4fd1ad4dda96be9bcc2a89a389df5d1c707fb0a48335017eb6b64"}
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.743890 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztvzs"]
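The probe output quoted throughout these records is the standard healthz check-list format: one "[+]name ok" or "[-]name failed: reason withheld" line per registered check, and a single failing check turns the whole endpoint into HTTP 500, which prober.go then records as probeResult="failure". A rough sketch of that aggregation shape follows; the checks map is hypothetical and this is not the real k8s.io/apiserver healthz package.

```go
// Sketch of the healthz-style aggregation behind the probe output above:
// each named check contributes one [+]/[-] line, and any failure makes the
// endpoint return HTTP 500. Line order here is whatever map iteration gives.
package main

import (
	"fmt"
	"net/http"
)

func healthz(checks map[string]error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for name, err := range checks {
			if err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe sees "statuscode: 500"
			body += "livez check failed\n"
		} else {
			body += "livez check passed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := map[string]error{
		"ping": nil,
		"poststarthook/authorization.openshift.io-bootstrapclusterroles": fmt.Errorf("not finished"),
	}
	http.HandleFunc("/livez", healthz(checks))
	_ = http.ListenAndServe(":8080", nil)
}
```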
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.745917 4854 generic.go:334] "Generic (PLEG): container finished" podID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerID="9bafbbf545ecf1586c3bf1dd33b7e0e2d6763b89f8ea6a903aa9d245d45c1fb7" exitCode=0
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.745994 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj878" event={"ID":"dd260432-f4bc-4c81-a5e1-e3205534cda8","Type":"ContainerDied","Data":"9bafbbf545ecf1586c3bf1dd33b7e0e2d6763b89f8ea6a903aa9d245d45c1fb7"}
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.748949 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7" event={"ID":"36a86a6b-3a2c-4994-af93-2b4ae754edfa","Type":"ContainerDied","Data":"e1ac23ce95f6c0a8b2048609d4e6e05117cec6d1a4d93da1d5fce1491008a127"}
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.748977 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ac23ce95f6c0a8b2048609d4e6e05117cec6d1a4d93da1d5fce1491008a127"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.749029 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.752125 4854 generic.go:334] "Generic (PLEG): container finished" podID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerID="6e85964ce18a1663b8c86086eb0ea005f34d83633036e6ee5c67aa5f0cdea28c" exitCode=0
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.752292 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8dxw" event={"ID":"be6f79dd-0ea7-442e-ab1e-e35b15d45721","Type":"ContainerDied","Data":"6e85964ce18a1663b8c86086eb0ea005f34d83633036e6ee5c67aa5f0cdea28c"}
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.763743 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" event={"ID":"ea3251f8-9e38-4094-86f1-98187e5b2c75","Type":"ContainerStarted","Data":"bc8bcb436e40f9b244ffa717821590a37f65f804f0ec9c69c9a9fcb2fe572167"}
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.768990 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cba05caa-c55b-409f-a0da-d1a064def5b0","Type":"ContainerStarted","Data":"abf3f15f156f71a23b4060b290702a5833ef037e09f9d7d0979202b3e2762785"}
Jan 03 05:41:09 crc kubenswrapper[4854]: W0103 05:41:09.775929 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28406837_5e09_49b4_8583_54a450f07ae4.slice/crio-593aaf5288fe9008494031595e2b1989b5c96ac91dc2e4f533d639ab666db3e3 WatchSource:0}: Error finding container 593aaf5288fe9008494031595e2b1989b5c96ac91dc2e4f533d639ab666db3e3: Status 404 returned error can't find the container with id 593aaf5288fe9008494031595e2b1989b5c96ac91dc2e4f533d639ab666db3e3
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.907240 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.907408 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.919009 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.942832 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.942854 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.942880 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Jan 03 05:41:09 crc kubenswrapper[4854]: I0103 05:41:09.942907 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.152231 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.333846 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tdlx9"
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.338844 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 05:41:10 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld
Jan 03 05:41:10 crc kubenswrapper[4854]: [+]process-running ok
Jan 03 05:41:10 crc kubenswrapper[4854]: healthz check failed
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.338914 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:10 crc kubenswrapper[4854]: E0103 05:41:10.663886 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 03 05:41:10 crc kubenswrapper[4854]: E0103 05:41:10.685733 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"]
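The ExecSync errors above and below are the readiness probe of the cni-sysctl-allowlist-ds-9d9fw pod: an exec probe running test -f /ready/ready inside a container that is already being killed with a 30s grace period (the kuberuntime_container.go record at 05:41:08.729680), so CRI-O refuses to register a new exec PID and the probe errors rather than merely failing. Stripped of CRI, an exec probe is plain exit-code semantics, as in this sketch (which runs the command on the host, not through the runtime):

```go
// Exit-code semantics of an exec readiness probe: the command succeeds
// (exit 0) only once the marker file exists. A plain illustration of
// `test -f /ready/ready`, not CRI's ExecSync API.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c", "test -f /ready/ready")
	if err := cmd.Run(); err != nil {
		fmt.Println("not ready:", err) // non-zero exit -> probe failure
		return
	}
	fmt.Println("ready") // exit 0 -> probe success
}
```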
Jan 03 05:41:10 crc kubenswrapper[4854]: E0103 05:41:10.692675 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 03 05:41:10 crc kubenswrapper[4854]: E0103 05:41:10.692758 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins"
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.716467 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v"
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.777840 4854 generic.go:334] "Generic (PLEG): container finished" podID="28406837-5e09-49b4-8583-54a450f07ae4" containerID="1233ca8735e35f0f568d29e6123b6567da8d1baeccdaf9497d0bcbb1d794da0f" exitCode=0
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.777902 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztvzs" event={"ID":"28406837-5e09-49b4-8583-54a450f07ae4","Type":"ContainerDied","Data":"1233ca8735e35f0f568d29e6123b6567da8d1baeccdaf9497d0bcbb1d794da0f"}
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.777928 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztvzs" event={"ID":"28406837-5e09-49b4-8583-54a450f07ae4","Type":"ContainerStarted","Data":"593aaf5288fe9008494031595e2b1989b5c96ac91dc2e4f533d639ab666db3e3"}
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.785092 4854 generic.go:334] "Generic (PLEG): container finished" podID="cba05caa-c55b-409f-a0da-d1a064def5b0" containerID="89da069298c0c6bffe3b8686cf9f0c6af2b82ff50d9d49e4f819aee29a9e7262" exitCode=0
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.785147 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cba05caa-c55b-409f-a0da-d1a064def5b0","Type":"ContainerDied","Data":"89da069298c0c6bffe3b8686cf9f0c6af2b82ff50d9d49e4f819aee29a9e7262"}
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.791573 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" event={"ID":"ea3251f8-9e38-4094-86f1-98187e5b2c75","Type":"ContainerStarted","Data":"139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b"}
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.791747 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf"
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.803920 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp"
Jan 03 05:41:10 crc kubenswrapper[4854]: I0103 05:41:10.835784 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" podStartSLOduration=25.83576222 podStartE2EDuration="25.83576222s" podCreationTimestamp="2026-01-03 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:10.835537664 +0000 UTC m=+49.162114236" watchObservedRunningTime="2026-01-03 05:41:10.83576222 +0000 UTC m=+49.162338792"
Jan 03 05:41:11 crc kubenswrapper[4854]: I0103 05:41:11.336499 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 05:41:11 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld
Jan 03 05:41:11 crc kubenswrapper[4854]: [+]process-running ok
Jan 03 05:41:11 crc kubenswrapper[4854]: healthz check failed
Jan 03 05:41:11 crc kubenswrapper[4854]: I0103 05:41:11.336563 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.192190 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.305614 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cba05caa-c55b-409f-a0da-d1a064def5b0-kubelet-dir\") pod \"cba05caa-c55b-409f-a0da-d1a064def5b0\" (UID: \"cba05caa-c55b-409f-a0da-d1a064def5b0\") "
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.305707 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cba05caa-c55b-409f-a0da-d1a064def5b0-kube-api-access\") pod \"cba05caa-c55b-409f-a0da-d1a064def5b0\" (UID: \"cba05caa-c55b-409f-a0da-d1a064def5b0\") "
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.307066 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.307260 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cba05caa-c55b-409f-a0da-d1a064def5b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cba05caa-c55b-409f-a0da-d1a064def5b0" (UID: "cba05caa-c55b-409f-a0da-d1a064def5b0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.342816 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba05caa-c55b-409f-a0da-d1a064def5b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cba05caa-c55b-409f-a0da-d1a064def5b0" (UID: "cba05caa-c55b-409f-a0da-d1a064def5b0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.343109 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.346329 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 05:41:12 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld
Jan 03 05:41:12 crc kubenswrapper[4854]: [+]process-running ok
Jan 03 05:41:12 crc kubenswrapper[4854]: healthz check failed
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.346387 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.407992 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cba05caa-c55b-409f-a0da-d1a064def5b0-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.408015 4854 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cba05caa-c55b-409f-a0da-d1a064def5b0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.810273 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cba05caa-c55b-409f-a0da-d1a064def5b0","Type":"ContainerDied","Data":"abf3f15f156f71a23b4060b290702a5833ef037e09f9d7d0979202b3e2762785"}
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.810321 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf3f15f156f71a23b4060b290702a5833ef037e09f9d7d0979202b3e2762785"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.810366 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.812771 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.812809 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.812925 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.812952 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.816788 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.816857 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.833002 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 03 05:41:12 crc kubenswrapper[4854]: I0103 05:41:12.941841 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.005069 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.015941 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.078721 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.212359 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.212343044 podStartE2EDuration="1.212343044s" podCreationTimestamp="2026-01-03 05:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:12.859226765 +0000 UTC m=+51.185803347" watchObservedRunningTime="2026-01-03 05:41:13.212343044 +0000 UTC m=+51.538919616"
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.243185 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8"
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.249395 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8"
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.340152 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 05:41:13 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld
Jan 03 05:41:13 crc kubenswrapper[4854]: [+]process-running ok
Jan 03 05:41:13 crc kubenswrapper[4854]: healthz check failed
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.340203 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:13 crc kubenswrapper[4854]: W0103 05:41:13.351067 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-197aed1a8c994109e34c74c0446df7e7aebeec72b7c291c1f7b5fe18b18013b5 WatchSource:0}: Error finding container 197aed1a8c994109e34c74c0446df7e7aebeec72b7c291c1f7b5fe18b18013b5: Status 404 returned error can't find the container with id 197aed1a8c994109e34c74c0446df7e7aebeec72b7c291c1f7b5fe18b18013b5
Jan 03 05:41:13 crc kubenswrapper[4854]: W0103 05:41:13.514531 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-069824e0729eb1ff38b6fda540c23f1a0cf57ac06146c2eccca4c024a2e55458 WatchSource:0}: Error finding container 069824e0729eb1ff38b6fda540c23f1a0cf57ac06146c2eccca4c024a2e55458: Status 404 returned error can't find the container with id 069824e0729eb1ff38b6fda540c23f1a0cf57ac06146c2eccca4c024a2e55458
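The volume records in this stretch always arrive in the same per-volume order: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), MountVolume.MountDevice for CSI/attachable volumes (operation_generator.go:580), then MountVolume.SetUp succeeded (operation_generator.go:637); teardown mirrors it with UnmountVolume started, TearDown succeeded, and "Volume detached". A log-shaped stand-in for that progression, using hypothetical helpers rather than kubelet's types:

```go
// Hypothetical compression of the per-volume stage sequence visible in the
// reconciler_common.go / operation_generator.go records above.
package main

import "fmt"

func mountStages(volume, pod string) []string {
	return []string{
		fmt.Sprintf("operationExecutor.VerifyControllerAttachedVolume started for volume %q pod %q", volume, pod), // reconciler_common.go:245
		fmt.Sprintf("operationExecutor.MountVolume started for volume %q pod %q", volume, pod),                    // reconciler_common.go:218
		fmt.Sprintf("MountVolume.MountDevice succeeded for volume %q (CSI/attachable only)", volume),              // operation_generator.go:580
		fmt.Sprintf("MountVolume.SetUp succeeded for volume %q pod %q", volume, pod),                              // operation_generator.go:637
	}
}

func main() {
	for _, s := range mountStages("nginx-conf", "networking-console-plugin-85b44fc459-gdk6g") {
		fmt.Println(s)
	}
}
```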
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.823547 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"197aed1a8c994109e34c74c0446df7e7aebeec72b7c291c1f7b5fe18b18013b5"}
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.827016 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3c6b1df77a171fe9f74fe974883de39abfb55fa5d30ee21adc30df156e1e43de"}
Jan 03 05:41:13 crc kubenswrapper[4854]: I0103 05:41:13.828700 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"069824e0729eb1ff38b6fda540c23f1a0cf57ac06146c2eccca4c024a2e55458"}
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.282263 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 03 05:41:14 crc kubenswrapper[4854]: E0103 05:41:14.282481 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a86a6b-3a2c-4994-af93-2b4ae754edfa" containerName="collect-profiles"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.282494 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a86a6b-3a2c-4994-af93-2b4ae754edfa" containerName="collect-profiles"
Jan 03 05:41:14 crc kubenswrapper[4854]: E0103 05:41:14.282510 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba05caa-c55b-409f-a0da-d1a064def5b0" containerName="pruner"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.282516 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba05caa-c55b-409f-a0da-d1a064def5b0" containerName="pruner"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.282615 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba05caa-c55b-409f-a0da-d1a064def5b0" containerName="pruner"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.282628 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a86a6b-3a2c-4994-af93-2b4ae754edfa" containerName="collect-profiles"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.282961 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.284639 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.288355 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.298755 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.339563 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 05:41:14 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld
Jan 03 05:41:14 crc kubenswrapper[4854]: [+]process-running ok
Jan 03 05:41:14 crc kubenswrapper[4854]: healthz check failed
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.339611 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.445697 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa8121a5-787e-4628-b1fc-2b714896b279-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa8121a5-787e-4628-b1fc-2b714896b279\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.445800 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa8121a5-787e-4628-b1fc-2b714896b279-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa8121a5-787e-4628-b1fc-2b714896b279\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.547007 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa8121a5-787e-4628-b1fc-2b714896b279-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa8121a5-787e-4628-b1fc-2b714896b279\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.547309 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa8121a5-787e-4628-b1fc-2b714896b279-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa8121a5-787e-4628-b1fc-2b714896b279\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.547455 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa8121a5-787e-4628-b1fc-2b714896b279-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa8121a5-787e-4628-b1fc-2b714896b279\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.563772 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa8121a5-787e-4628-b1fc-2b714896b279-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa8121a5-787e-4628-b1fc-2b714896b279\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.599589 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 03 05:41:14 crc kubenswrapper[4854]: I0103 05:41:14.845188 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"14e4c5dd46afe369a292e546ff125684536302d16d409818a564d550c0c65abc"}
Jan 03 05:41:15 crc kubenswrapper[4854]: I0103 05:41:15.077376 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 03 05:41:15 crc kubenswrapper[4854]: W0103 05:41:15.109339 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfa8121a5_787e_4628_b1fc_2b714896b279.slice/crio-4a125b376d427d92af6dfda51ff422e756bfd48bd27d64915c41f05fdfe7deae WatchSource:0}: Error finding container 4a125b376d427d92af6dfda51ff422e756bfd48bd27d64915c41f05fdfe7deae: Status 404 returned error can't find the container with id 4a125b376d427d92af6dfda51ff422e756bfd48bd27d64915c41f05fdfe7deae
Jan 03 05:41:15 crc kubenswrapper[4854]: I0103 05:41:15.337231 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 05:41:15 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld
Jan 03 05:41:15 crc kubenswrapper[4854]: [+]process-running ok
Jan 03 05:41:15 crc kubenswrapper[4854]: healthz check failed
Jan 03 05:41:15 crc kubenswrapper[4854]: I0103 05:41:15.337278 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 05:41:15 crc kubenswrapper[4854]: I0103 05:41:15.362984 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2q7dr"
Jan 03 05:41:15 crc kubenswrapper[4854]: I0103 05:41:15.852353 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"69e7fb68589a265126ad796b52c2d5180e3043cbfd9c6ab1d82c4908c697089a"}
Jan 03 05:41:15 crc kubenswrapper[4854]: I0103 05:41:15.854616 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa8121a5-787e-4628-b1fc-2b714896b279","Type":"ContainerStarted","Data":"4a125b376d427d92af6dfda51ff422e756bfd48bd27d64915c41f05fdfe7deae"}
Jan 03 05:41:15 crc kubenswrapper[4854]: I0103 05:41:15.856643 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1fcbaebd6f05944ecddd87dd7b87fe2245cd99ad78f52c23fe59ddd422bba259"}
Jan 03 05:41:15 crc kubenswrapper[4854]: I0103 05:41:15.856864 4854 kubelet.go:2542]
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:41:16 crc kubenswrapper[4854]: I0103 05:41:16.336106 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 05:41:16 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld Jan 03 05:41:16 crc kubenswrapper[4854]: [+]process-running ok Jan 03 05:41:16 crc kubenswrapper[4854]: healthz check failed Jan 03 05:41:16 crc kubenswrapper[4854]: I0103 05:41:16.336154 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 05:41:17 crc kubenswrapper[4854]: I0103 05:41:17.336260 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 05:41:17 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld Jan 03 05:41:17 crc kubenswrapper[4854]: [+]process-running ok Jan 03 05:41:17 crc kubenswrapper[4854]: healthz check failed Jan 03 05:41:17 crc kubenswrapper[4854]: I0103 05:41:17.336625 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 05:41:17 crc kubenswrapper[4854]: I0103 05:41:17.873772 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa8121a5-787e-4628-b1fc-2b714896b279","Type":"ContainerStarted","Data":"92d54b3c659b97c86ef5e668b2cfa65903f26c4fb0d1738597038b01aa7b4044"} Jan 03 05:41:18 crc kubenswrapper[4854]: I0103 05:41:18.260070 4854 patch_prober.go:28] interesting pod/console-f9d7485db-zhzlw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 03 05:41:18 crc kubenswrapper[4854]: I0103 05:41:18.260124 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zhzlw" podUID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 03 05:41:18 crc kubenswrapper[4854]: I0103 05:41:18.335489 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 05:41:18 crc kubenswrapper[4854]: [-]has-synced failed: reason withheld Jan 03 05:41:18 crc kubenswrapper[4854]: [+]process-running ok Jan 03 05:41:18 crc kubenswrapper[4854]: healthz check failed Jan 03 05:41:18 crc kubenswrapper[4854]: I0103 05:41:18.335548 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 05:41:18 crc kubenswrapper[4854]: I0103 05:41:18.906588 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.906568215 podStartE2EDuration="4.906568215s" podCreationTimestamp="2026-01-03 05:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:18.904676016 +0000 UTC m=+57.231252588" watchObservedRunningTime="2026-01-03 05:41:18.906568215 +0000 UTC m=+57.233144807" Jan 03 05:41:19 crc kubenswrapper[4854]: I0103 05:41:19.336565 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:41:19 crc kubenswrapper[4854]: I0103 05:41:19.338636 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 05:41:19 crc kubenswrapper[4854]: I0103 05:41:19.904210 4854 generic.go:334] "Generic (PLEG): container finished" podID="fa8121a5-787e-4628-b1fc-2b714896b279" containerID="92d54b3c659b97c86ef5e668b2cfa65903f26c4fb0d1738597038b01aa7b4044" exitCode=0 Jan 03 05:41:19 crc kubenswrapper[4854]: I0103 05:41:19.904538 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa8121a5-787e-4628-b1fc-2b714896b279","Type":"ContainerDied","Data":"92d54b3c659b97c86ef5e668b2cfa65903f26c4fb0d1738597038b01aa7b4044"} Jan 03 05:41:19 crc kubenswrapper[4854]: I0103 05:41:19.949008 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dmlm5" Jan 03 05:41:20 crc kubenswrapper[4854]: E0103 05:41:20.665275 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:20 crc kubenswrapper[4854]: E0103 05:41:20.667014 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:20 crc kubenswrapper[4854]: E0103 05:41:20.669282 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:20 crc kubenswrapper[4854]: E0103 05:41:20.669311 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" Jan 03 05:41:23 crc kubenswrapper[4854]: I0103 05:41:23.351564 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfmjq"] Jan 
03 05:41:23 crc kubenswrapper[4854]: I0103 05:41:23.352959 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" containerID="cri-o://ab2c5997e913c32b81190d8a32299720151d9d0dd9d33b021bee394839863bf9" gracePeriod=30 Jan 03 05:41:23 crc kubenswrapper[4854]: I0103 05:41:23.459542 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks"] Jan 03 05:41:23 crc kubenswrapper[4854]: I0103 05:41:23.459727 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" containerID="cri-o://5fef0a059ca9fa95817cffca2c3d17f70f1a1fe2384ed6b3b334f9d0c005b83e" gracePeriod=30 Jan 03 05:41:25 crc kubenswrapper[4854]: I0103 05:41:25.177043 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" Jan 03 05:41:25 crc kubenswrapper[4854]: I0103 05:41:25.943505 4854 generic.go:334] "Generic (PLEG): container finished" podID="e0316818-6edd-4e11-9a85-cdc385194515" containerID="5fef0a059ca9fa95817cffca2c3d17f70f1a1fe2384ed6b3b334f9d0c005b83e" exitCode=0 Jan 03 05:41:25 crc kubenswrapper[4854]: I0103 05:41:25.944038 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" event={"ID":"e0316818-6edd-4e11-9a85-cdc385194515","Type":"ContainerDied","Data":"5fef0a059ca9fa95817cffca2c3d17f70f1a1fe2384ed6b3b334f9d0c005b83e"} Jan 03 05:41:25 crc kubenswrapper[4854]: I0103 05:41:25.945645 4854 generic.go:334] "Generic (PLEG): container finished" podID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerID="ab2c5997e913c32b81190d8a32299720151d9d0dd9d33b021bee394839863bf9" exitCode=0 Jan 03 05:41:25 crc kubenswrapper[4854]: I0103 05:41:25.945693 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" event={"ID":"7d7acbec-3363-42f1-b14d-150409b8c40b","Type":"ContainerDied","Data":"ab2c5997e913c32b81190d8a32299720151d9d0dd9d33b021bee394839863bf9"} Jan 03 05:41:27 crc kubenswrapper[4854]: I0103 05:41:27.128844 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 03 05:41:28 crc kubenswrapper[4854]: I0103 05:41:28.251264 4854 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wfmjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 03 05:41:28 crc kubenswrapper[4854]: I0103 05:41:28.251673 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 03 05:41:28 crc kubenswrapper[4854]: I0103 05:41:28.264611 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:41:28 crc kubenswrapper[4854]: I0103 
05:41:28.270314 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:41:28 crc kubenswrapper[4854]: I0103 05:41:28.281669 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.281655005 podStartE2EDuration="1.281655005s" podCreationTimestamp="2026-01-03 05:41:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:41:28.27916157 +0000 UTC m=+66.605738192" watchObservedRunningTime="2026-01-03 05:41:28.281655005 +0000 UTC m=+66.608231577" Jan 03 05:41:28 crc kubenswrapper[4854]: I0103 05:41:28.732831 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:41:30 crc kubenswrapper[4854]: E0103 05:41:30.665328 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:30 crc kubenswrapper[4854]: E0103 05:41:30.669158 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:30 crc kubenswrapper[4854]: E0103 05:41:30.671064 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:30 crc kubenswrapper[4854]: E0103 05:41:30.671138 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" Jan 03 05:41:31 crc kubenswrapper[4854]: I0103 05:41:31.210319 4854 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9tbks container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 03 05:41:31 crc kubenswrapper[4854]: I0103 05:41:31.210883 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 03 05:41:35 crc kubenswrapper[4854]: I0103 05:41:35.455428 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 05:41:35 crc kubenswrapper[4854]: I0103 05:41:35.578839 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa8121a5-787e-4628-b1fc-2b714896b279-kubelet-dir\") pod \"fa8121a5-787e-4628-b1fc-2b714896b279\" (UID: \"fa8121a5-787e-4628-b1fc-2b714896b279\") " Jan 03 05:41:35 crc kubenswrapper[4854]: I0103 05:41:35.578891 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa8121a5-787e-4628-b1fc-2b714896b279-kube-api-access\") pod \"fa8121a5-787e-4628-b1fc-2b714896b279\" (UID: \"fa8121a5-787e-4628-b1fc-2b714896b279\") " Jan 03 05:41:35 crc kubenswrapper[4854]: I0103 05:41:35.578916 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa8121a5-787e-4628-b1fc-2b714896b279-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa8121a5-787e-4628-b1fc-2b714896b279" (UID: "fa8121a5-787e-4628-b1fc-2b714896b279"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:41:35 crc kubenswrapper[4854]: I0103 05:41:35.579246 4854 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa8121a5-787e-4628-b1fc-2b714896b279-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 05:41:35 crc kubenswrapper[4854]: I0103 05:41:35.584044 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8121a5-787e-4628-b1fc-2b714896b279-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa8121a5-787e-4628-b1fc-2b714896b279" (UID: "fa8121a5-787e-4628-b1fc-2b714896b279"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:41:35 crc kubenswrapper[4854]: I0103 05:41:35.680798 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa8121a5-787e-4628-b1fc-2b714896b279-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 05:41:36 crc kubenswrapper[4854]: I0103 05:41:36.025401 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa8121a5-787e-4628-b1fc-2b714896b279","Type":"ContainerDied","Data":"4a125b376d427d92af6dfda51ff422e756bfd48bd27d64915c41f05fdfe7deae"} Jan 03 05:41:36 crc kubenswrapper[4854]: I0103 05:41:36.025779 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a125b376d427d92af6dfda51ff422e756bfd48bd27d64915c41f05fdfe7deae" Jan 03 05:41:36 crc kubenswrapper[4854]: I0103 05:41:36.025845 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 05:41:39 crc kubenswrapper[4854]: I0103 05:41:39.252224 4854 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wfmjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:41:39 crc kubenswrapper[4854]: I0103 05:41:39.252762 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:41:40 crc kubenswrapper[4854]: I0103 05:41:40.056696 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9d9fw_3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f/kube-multus-additional-cni-plugins/0.log" Jan 03 05:41:40 crc kubenswrapper[4854]: I0103 05:41:40.056775 4854 generic.go:334] "Generic (PLEG): container finished" podID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" exitCode=137 Jan 03 05:41:40 crc kubenswrapper[4854]: I0103 05:41:40.056882 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" event={"ID":"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f","Type":"ContainerDied","Data":"032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f"} Jan 03 05:41:40 crc kubenswrapper[4854]: I0103 05:41:40.331824 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 05:41:40 crc kubenswrapper[4854]: E0103 05:41:40.663597 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:40 crc kubenswrapper[4854]: E0103 05:41:40.664043 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:40 crc kubenswrapper[4854]: E0103 05:41:40.664458 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:40 crc kubenswrapper[4854]: E0103 05:41:40.664514 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" Jan 03 05:41:42 crc kubenswrapper[4854]: I0103 05:41:42.207299 4854 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9tbks container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:41:42 crc kubenswrapper[4854]: I0103 05:41:42.207386 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.295162 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 05:41:48 crc kubenswrapper[4854]: E0103 05:41:48.296422 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8121a5-787e-4628-b1fc-2b714896b279" containerName="pruner" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.296451 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8121a5-787e-4628-b1fc-2b714896b279" containerName="pruner" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.296716 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8121a5-787e-4628-b1fc-2b714896b279" containerName="pruner" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.297673 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.300785 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.304576 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.309381 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.476308 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.476398 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.577493 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.577569 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.577652 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.614575 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:41:48 crc kubenswrapper[4854]: I0103 05:41:48.633894 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:41:49 crc kubenswrapper[4854]: I0103 05:41:49.251545 4854 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wfmjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:41:49 crc kubenswrapper[4854]: I0103 05:41:49.251628 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:41:50 crc kubenswrapper[4854]: E0103 05:41:50.663077 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:50 crc kubenswrapper[4854]: E0103 05:41:50.663637 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:50 crc kubenswrapper[4854]: E0103 05:41:50.664634 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:41:50 crc kubenswrapper[4854]: E0103 05:41:50.664717 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.206525 4854 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9tbks container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.206934 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.694686 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.696002 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.706631 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.740828 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.740904 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.740942 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-var-lock\") pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.842296 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.842387 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.842441 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-var-lock\") pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.842566 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.842606 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-var-lock\") 
pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:52 crc kubenswrapper[4854]: I0103 05:41:52.872473 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:53 crc kubenswrapper[4854]: I0103 05:41:53.068534 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:41:53 crc kubenswrapper[4854]: I0103 05:41:53.379417 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 05:41:59 crc kubenswrapper[4854]: I0103 05:41:59.252337 4854 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wfmjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:41:59 crc kubenswrapper[4854]: I0103 05:41:59.252829 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:42:00 crc kubenswrapper[4854]: E0103 05:42:00.662759 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:42:00 crc kubenswrapper[4854]: E0103 05:42:00.663582 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:42:00 crc kubenswrapper[4854]: E0103 05:42:00.664553 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:42:00 crc kubenswrapper[4854]: E0103 05:42:00.664624 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" 
containerName="kube-multus-additional-cni-plugins" Jan 03 05:42:02 crc kubenswrapper[4854]: I0103 05:42:02.206927 4854 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9tbks container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:42:02 crc kubenswrapper[4854]: I0103 05:42:02.207292 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:42:04 crc kubenswrapper[4854]: E0103 05:42:04.258696 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 03 05:42:04 crc kubenswrapper[4854]: E0103 05:42:04.259561 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lvbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dsx2k_openshift-marketplace(7eefb384-ef8d-4c37-9287-20114d60743d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 05:42:04 crc kubenswrapper[4854]: E0103 05:42:04.261305 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dsx2k" 
podUID="7eefb384-ef8d-4c37-9287-20114d60743d" Jan 03 05:42:06 crc kubenswrapper[4854]: E0103 05:42:06.327578 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 03 05:42:06 crc kubenswrapper[4854]: E0103 05:42:06.327898 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bqxfg_openshift-marketplace(6127414f-e3e3-4c52-81a8-f6fea70b7d0c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 05:42:06 crc kubenswrapper[4854]: E0103 05:42:06.329131 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bqxfg" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" Jan 03 05:42:07 crc kubenswrapper[4854]: E0103 05:42:07.226208 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dsx2k" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" Jan 03 05:42:08 crc kubenswrapper[4854]: E0103 05:42:08.017463 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 03 05:42:08 crc kubenswrapper[4854]: E0103 05:42:08.017951 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82x46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cj878_openshift-marketplace(dd260432-f4bc-4c81-a5e1-e3205534cda8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 05:42:08 crc kubenswrapper[4854]: E0103 05:42:08.019210 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cj878" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" Jan 03 05:42:08 crc kubenswrapper[4854]: E0103 05:42:08.060638 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 03 05:42:08 crc kubenswrapper[4854]: E0103 05:42:08.060814 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtmpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-64gkx_openshift-marketplace(80855e9f-3a0c-439c-87cf-933b8825c398): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 05:42:08 crc kubenswrapper[4854]: E0103 05:42:08.062054 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-64gkx" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" Jan 03 05:42:09 crc kubenswrapper[4854]: I0103 05:42:09.250908 4854 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wfmjq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:42:09 crc kubenswrapper[4854]: I0103 05:42:09.250954 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:42:10 crc kubenswrapper[4854]: E0103 05:42:10.662458 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:42:10 crc kubenswrapper[4854]: E0103 05:42:10.662956 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:42:10 crc kubenswrapper[4854]: E0103 05:42:10.663280 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 03 05:42:10 crc kubenswrapper[4854]: E0103 05:42:10.663317 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.254301 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cj878" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.254366 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-64gkx" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.254379 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bqxfg" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.345506 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.351852 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.355645 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.355780 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgv75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ztvzs_openshift-marketplace(28406837-5e09-49b4-8583-54a450f07ae4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.357274 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ztvzs" podUID="28406837-5e09-49b4-8583-54a450f07ae4" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.356119 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9d9fw_3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f/kube-multus-additional-cni-plugins/0.log" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.358263 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.373633 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.373820 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz766,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f2b22_openshift-marketplace(332dcfb7-8bcf-46bf-9168-4bdb4411e55e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.374292 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert\") pod \"e0316818-6edd-4e11-9a85-cdc385194515\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.374332 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config\") pod \"e0316818-6edd-4e11-9a85-cdc385194515\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.374365 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbtmc\" (UniqueName: \"kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc\") pod \"e0316818-6edd-4e11-9a85-cdc385194515\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.374429 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca\") pod \"e0316818-6edd-4e11-9a85-cdc385194515\" (UID: \"e0316818-6edd-4e11-9a85-cdc385194515\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.375301 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z"] Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.375320 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-f2b22" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.375595 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.375608 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.375625 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.375651 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.375663 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.375670 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.376096 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" containerName="controller-manager" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.376108 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" containerName="kube-multus-additional-cni-plugins" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.376118 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.376589 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.376856 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config" (OuterVolumeSpecName: "config") pod "e0316818-6edd-4e11-9a85-cdc385194515" (UID: "e0316818-6edd-4e11-9a85-cdc385194515"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.377328 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0316818-6edd-4e11-9a85-cdc385194515" (UID: "e0316818-6edd-4e11-9a85-cdc385194515"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.378326 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.378671 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9kcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-c8dxw_openshift-marketplace(be6f79dd-0ea7-442e-ab1e-e35b15d45721): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.379534 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.379553 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0316818-6edd-4e11-9a85-cdc385194515-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.380250 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/redhat-marketplace-c8dxw" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.380576 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.380659 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0316818-6edd-4e11-9a85-cdc385194515" (UID: "e0316818-6edd-4e11-9a85-cdc385194515"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.380709 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57qpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mfbxz_openshift-marketplace(54056ea8-c177-4995-8261-209eb3200f5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.381413 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc" (OuterVolumeSpecName: "kube-api-access-pbtmc") pod "e0316818-6edd-4e11-9a85-cdc385194515" (UID: "e0316818-6edd-4e11-9a85-cdc385194515"). InnerVolumeSpecName "kube-api-access-pbtmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: E0103 05:42:11.385140 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mfbxz" podUID="54056ea8-c177-4995-8261-209eb3200f5f" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.399708 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z"] Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.480808 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-cni-sysctl-allowlist\") pod \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.480900 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7acbec-3363-42f1-b14d-150409b8c40b-serving-cert\") pod \"7d7acbec-3363-42f1-b14d-150409b8c40b\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.480951 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q92r\" (UniqueName: \"kubernetes.io/projected/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-kube-api-access-6q92r\") pod \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.481000 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-client-ca\") pod \"7d7acbec-3363-42f1-b14d-150409b8c40b\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.481447 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" (UID: "3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.481926 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-ready\") pod \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.481952 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-tuning-conf-dir\") pod \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\" (UID: \"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.481968 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-proxy-ca-bundles\") pod \"7d7acbec-3363-42f1-b14d-150409b8c40b\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482002 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m9lf\" (UniqueName: \"kubernetes.io/projected/7d7acbec-3363-42f1-b14d-150409b8c40b-kube-api-access-8m9lf\") pod \"7d7acbec-3363-42f1-b14d-150409b8c40b\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482055 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-config\") pod \"7d7acbec-3363-42f1-b14d-150409b8c40b\" (UID: \"7d7acbec-3363-42f1-b14d-150409b8c40b\") " Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482194 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d7acbec-3363-42f1-b14d-150409b8c40b" (UID: "7d7acbec-3363-42f1-b14d-150409b8c40b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482316 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-config\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482377 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qzns\" (UniqueName: \"kubernetes.io/projected/95421875-a016-4eba-8017-27f0276a6bc4-kube-api-access-7qzns\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482405 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95421875-a016-4eba-8017-27f0276a6bc4-serving-cert\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482420 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d7acbec-3363-42f1-b14d-150409b8c40b" (UID: "7d7acbec-3363-42f1-b14d-150409b8c40b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482432 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-client-ca\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482490 4854 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482500 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482509 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0316818-6edd-4e11-9a85-cdc385194515-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482518 4854 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482526 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbtmc\" (UniqueName: \"kubernetes.io/projected/e0316818-6edd-4e11-9a85-cdc385194515-kube-api-access-pbtmc\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482606 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-ready" (OuterVolumeSpecName: "ready") pod "3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" (UID: "3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.482691 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" (UID: "3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.483054 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-config" (OuterVolumeSpecName: "config") pod "7d7acbec-3363-42f1-b14d-150409b8c40b" (UID: "7d7acbec-3363-42f1-b14d-150409b8c40b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.490317 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-kube-api-access-6q92r" (OuterVolumeSpecName: "kube-api-access-6q92r") pod "3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" (UID: "3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f"). InnerVolumeSpecName "kube-api-access-6q92r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.490896 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7acbec-3363-42f1-b14d-150409b8c40b-kube-api-access-8m9lf" (OuterVolumeSpecName: "kube-api-access-8m9lf") pod "7d7acbec-3363-42f1-b14d-150409b8c40b" (UID: "7d7acbec-3363-42f1-b14d-150409b8c40b"). InnerVolumeSpecName "kube-api-access-8m9lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.491152 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7acbec-3363-42f1-b14d-150409b8c40b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d7acbec-3363-42f1-b14d-150409b8c40b" (UID: "7d7acbec-3363-42f1-b14d-150409b8c40b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583212 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-config\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583274 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qzns\" (UniqueName: \"kubernetes.io/projected/95421875-a016-4eba-8017-27f0276a6bc4-kube-api-access-7qzns\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583301 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95421875-a016-4eba-8017-27f0276a6bc4-serving-cert\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583323 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-client-ca\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583378 4854 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583389 4854 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-ready\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583398 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m9lf\" (UniqueName: \"kubernetes.io/projected/7d7acbec-3363-42f1-b14d-150409b8c40b-kube-api-access-8m9lf\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583409 
4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7acbec-3363-42f1-b14d-150409b8c40b-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583418 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7acbec-3363-42f1-b14d-150409b8c40b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.583427 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q92r\" (UniqueName: \"kubernetes.io/projected/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f-kube-api-access-6q92r\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.584506 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-client-ca\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.584957 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-config\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.587254 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95421875-a016-4eba-8017-27f0276a6bc4-serving-cert\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.600256 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qzns\" (UniqueName: \"kubernetes.io/projected/95421875-a016-4eba-8017-27f0276a6bc4-kube-api-access-7qzns\") pod \"route-controller-manager-85f4ff9897-qqv5z\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.732262 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.735443 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.815328 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 05:42:11 crc kubenswrapper[4854]: W0103 05:42:11.830238 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0380b43d_2d7f_490d_a822_1740bfa5c9ac.slice/crio-772e3ad9a8e4c3381089d88db7f323b0fbe47a40dd52f0ae0373dab41b57898c WatchSource:0}: Error finding container 772e3ad9a8e4c3381089d88db7f323b0fbe47a40dd52f0ae0373dab41b57898c: Status 404 returned error can't find the container with id 772e3ad9a8e4c3381089d88db7f323b0fbe47a40dd52f0ae0373dab41b57898c Jan 03 05:42:11 crc kubenswrapper[4854]: I0103 05:42:11.967573 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z"] Jan 03 05:42:11 crc kubenswrapper[4854]: W0103 05:42:11.979500 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95421875_a016_4eba_8017_27f0276a6bc4.slice/crio-41980d0a8fb9c6c05c3fd17f54281275fb20f06f5c28139b7460bd4fcc82c83f WatchSource:0}: Error finding container 41980d0a8fb9c6c05c3fd17f54281275fb20f06f5c28139b7460bd4fcc82c83f: Status 404 returned error can't find the container with id 41980d0a8fb9c6c05c3fd17f54281275fb20f06f5c28139b7460bd4fcc82c83f Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.206348 4854 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9tbks container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.206755 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" podUID="e0316818-6edd-4e11-9a85-cdc385194515" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.260357 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9d9fw_3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f/kube-multus-additional-cni-plugins/0.log" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.260422 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" event={"ID":"3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f","Type":"ContainerDied","Data":"b6aea1a9e8cada6f94502eaa194ee03c1df61ec2d2d5b33e29f8f1b563f17baa"} Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.260477 4854 scope.go:117] "RemoveContainer" containerID="032197ad0435416f99cd15bb36715f4d3b236460e5ef843df74f3f01287f741f" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.260587 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9d9fw" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.265303 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1af7da24-cf4a-4127-9fb6-bc43d033a87b","Type":"ContainerStarted","Data":"9291e30f5c04785a07f3b76e6c6878519a075e9d7e27f9c7bc025a7d45dc4b70"} Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.265338 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1af7da24-cf4a-4127-9fb6-bc43d033a87b","Type":"ContainerStarted","Data":"f60ebe2b2859393bd968febd8d3b72a58c105c850f2178423e44e720c357971b"} Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.266526 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" event={"ID":"95421875-a016-4eba-8017-27f0276a6bc4","Type":"ContainerStarted","Data":"ed4b4f37e6ba7ba03783c9d9604cc2dbb871c177cca5ae2ebbda2328416cdd46"} Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.266600 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" event={"ID":"95421875-a016-4eba-8017-27f0276a6bc4","Type":"ContainerStarted","Data":"41980d0a8fb9c6c05c3fd17f54281275fb20f06f5c28139b7460bd4fcc82c83f"} Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.266742 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.270537 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" event={"ID":"e0316818-6edd-4e11-9a85-cdc385194515","Type":"ContainerDied","Data":"1f705b963af8804db36c9b3e9ffa96ee6202d0757667c41c4379d77c1c54db92"} Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.270586 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.272996 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" event={"ID":"7d7acbec-3363-42f1-b14d-150409b8c40b","Type":"ContainerDied","Data":"99b23c817a8b2e751a4084c606f1d332ca954e7e5a718cddc376d1d0cac5d9d7"} Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.273206 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wfmjq" Jan 03 05:42:12 crc kubenswrapper[4854]: E0103 05:42:12.278004 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f2b22" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.278885 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0380b43d-2d7f-490d-a822-1740bfa5c9ac","Type":"ContainerStarted","Data":"6244e3491422980599193660b370f80a475b2819d07bcb04d4cadd6908591e45"} Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.278938 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0380b43d-2d7f-490d-a822-1740bfa5c9ac","Type":"ContainerStarted","Data":"772e3ad9a8e4c3381089d88db7f323b0fbe47a40dd52f0ae0373dab41b57898c"} Jan 03 05:42:12 crc kubenswrapper[4854]: E0103 05:42:12.283308 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-c8dxw" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" Jan 03 05:42:12 crc kubenswrapper[4854]: E0103 05:42:12.283415 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mfbxz" podUID="54056ea8-c177-4995-8261-209eb3200f5f" Jan 03 05:42:12 crc kubenswrapper[4854]: E0103 05:42:12.283470 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ztvzs" podUID="28406837-5e09-49b4-8583-54a450f07ae4" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.284644 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=24.284625351 podStartE2EDuration="24.284625351s" podCreationTimestamp="2026-01-03 05:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:42:12.278897183 +0000 UTC m=+110.605473765" watchObservedRunningTime="2026-01-03 05:42:12.284625351 +0000 UTC m=+110.611201923" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.308530 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" podStartSLOduration=29.30850778 podStartE2EDuration="29.30850778s" podCreationTimestamp="2026-01-03 05:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:42:12.296295573 +0000 UTC m=+110.622872155" watchObservedRunningTime="2026-01-03 05:42:12.30850778 +0000 UTC m=+110.635084342" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.311298 4854 scope.go:117] 
"RemoveContainer" containerID="5fef0a059ca9fa95817cffca2c3d17f70f1a1fe2384ed6b3b334f9d0c005b83e" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.328801 4854 scope.go:117] "RemoveContainer" containerID="ab2c5997e913c32b81190d8a32299720151d9d0dd9d33b021bee394839863bf9" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.337498 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=20.33747616 podStartE2EDuration="20.33747616s" podCreationTimestamp="2026-01-03 05:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:42:12.334117323 +0000 UTC m=+110.660693895" watchObservedRunningTime="2026-01-03 05:42:12.33747616 +0000 UTC m=+110.664052742" Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.371220 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfmjq"] Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.373895 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfmjq"] Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.377805 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9d9fw"] Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.380283 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9d9fw"] Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.419988 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks"] Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.427901 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9tbks"] Jan 03 05:42:12 crc kubenswrapper[4854]: I0103 05:42:12.610811 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:42:13 crc kubenswrapper[4854]: I0103 05:42:13.288668 4854 generic.go:334] "Generic (PLEG): container finished" podID="1af7da24-cf4a-4127-9fb6-bc43d033a87b" containerID="9291e30f5c04785a07f3b76e6c6878519a075e9d7e27f9c7bc025a7d45dc4b70" exitCode=0 Jan 03 05:42:13 crc kubenswrapper[4854]: I0103 05:42:13.288734 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1af7da24-cf4a-4127-9fb6-bc43d033a87b","Type":"ContainerDied","Data":"9291e30f5c04785a07f3b76e6c6878519a075e9d7e27f9c7bc025a7d45dc4b70"} Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.125446 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f" path="/var/lib/kubelet/pods/3d56bfbf-7d2a-40c0-9df0-3e20571a8f1f/volumes" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.126478 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7acbec-3363-42f1-b14d-150409b8c40b" path="/var/lib/kubelet/pods/7d7acbec-3363-42f1-b14d-150409b8c40b/volumes" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.127049 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0316818-6edd-4e11-9a85-cdc385194515" path="/var/lib/kubelet/pods/e0316818-6edd-4e11-9a85-cdc385194515/volumes" Jan 03 05:42:14 crc 
kubenswrapper[4854]: I0103 05:42:14.303589 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b46bdbb7f-szr25"] Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.305410 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.310189 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.310453 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.311692 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.314493 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.314777 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.315050 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.320046 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.321775 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b46bdbb7f-szr25"] Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.430420 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4c2r\" (UniqueName: \"kubernetes.io/projected/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-kube-api-access-x4c2r\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.430786 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-serving-cert\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.430845 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-config\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.430992 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-proxy-ca-bundles\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " 
pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.431399 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-client-ca\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.532605 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-serving-cert\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.532691 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-config\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.532886 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-proxy-ca-bundles\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.532995 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-client-ca\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.533139 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4c2r\" (UniqueName: \"kubernetes.io/projected/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-kube-api-access-x4c2r\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.534104 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-client-ca\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.535015 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-proxy-ca-bundles\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.535607 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-config\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.547875 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-serving-cert\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.551863 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4c2r\" (UniqueName: \"kubernetes.io/projected/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-kube-api-access-x4c2r\") pod \"controller-manager-b46bdbb7f-szr25\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.592198 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.633877 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kube-api-access\") pod \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\" (UID: \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\") " Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.633930 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kubelet-dir\") pod \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\" (UID: \"1af7da24-cf4a-4127-9fb6-bc43d033a87b\") " Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.634267 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1af7da24-cf4a-4127-9fb6-bc43d033a87b" (UID: "1af7da24-cf4a-4127-9fb6-bc43d033a87b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.638183 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1af7da24-cf4a-4127-9fb6-bc43d033a87b" (UID: "1af7da24-cf4a-4127-9fb6-bc43d033a87b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.647546 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.735806 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.735834 4854 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1af7da24-cf4a-4127-9fb6-bc43d033a87b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:14 crc kubenswrapper[4854]: W0103 05:42:14.831813 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a6f0af_e686_4a3a_b1af_b4ab77e8362c.slice/crio-53d723eed195f9d14654d7dc950311a7459a3cdc0b186b5c9041573325499ee1 WatchSource:0}: Error finding container 53d723eed195f9d14654d7dc950311a7459a3cdc0b186b5c9041573325499ee1: Status 404 returned error can't find the container with id 53d723eed195f9d14654d7dc950311a7459a3cdc0b186b5c9041573325499ee1 Jan 03 05:42:14 crc kubenswrapper[4854]: I0103 05:42:14.834963 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b46bdbb7f-szr25"] Jan 03 05:42:15 crc kubenswrapper[4854]: I0103 05:42:15.309705 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" event={"ID":"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c","Type":"ContainerStarted","Data":"40700b72f405147dac2d7ef6884431498ebb959ac301a575ef8860d36f4cc4f4"} Jan 03 05:42:15 crc kubenswrapper[4854]: I0103 05:42:15.309980 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" event={"ID":"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c","Type":"ContainerStarted","Data":"53d723eed195f9d14654d7dc950311a7459a3cdc0b186b5c9041573325499ee1"} Jan 03 05:42:15 crc kubenswrapper[4854]: I0103 05:42:15.310602 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:15 crc kubenswrapper[4854]: I0103 05:42:15.311399 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1af7da24-cf4a-4127-9fb6-bc43d033a87b","Type":"ContainerDied","Data":"f60ebe2b2859393bd968febd8d3b72a58c105c850f2178423e44e720c357971b"} Jan 03 05:42:15 crc kubenswrapper[4854]: I0103 05:42:15.311419 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f60ebe2b2859393bd968febd8d3b72a58c105c850f2178423e44e720c357971b" Jan 03 05:42:15 crc kubenswrapper[4854]: I0103 05:42:15.311443 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 05:42:15 crc kubenswrapper[4854]: I0103 05:42:15.317861 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:42:15 crc kubenswrapper[4854]: I0103 05:42:15.328187 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" podStartSLOduration=32.328167267 podStartE2EDuration="32.328167267s" podCreationTimestamp="2026-01-03 05:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:42:15.327167321 +0000 UTC m=+113.653743903" watchObservedRunningTime="2026-01-03 05:42:15.328167267 +0000 UTC m=+113.654743839" Jan 03 05:42:24 crc kubenswrapper[4854]: I0103 05:42:24.368764 4854 generic.go:334] "Generic (PLEG): container finished" podID="7eefb384-ef8d-4c37-9287-20114d60743d" containerID="881bd14d507af4d806c909749c7308fcaae6924c53fd58bcb90c4b0d23944f6f" exitCode=0 Jan 03 05:42:24 crc kubenswrapper[4854]: I0103 05:42:24.369150 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsx2k" event={"ID":"7eefb384-ef8d-4c37-9287-20114d60743d","Type":"ContainerDied","Data":"881bd14d507af4d806c909749c7308fcaae6924c53fd58bcb90c4b0d23944f6f"} Jan 03 05:42:25 crc kubenswrapper[4854]: I0103 05:42:25.376785 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsx2k" event={"ID":"7eefb384-ef8d-4c37-9287-20114d60743d","Type":"ContainerStarted","Data":"7f033c1151c54efd57adfa810788bbfb6f3c385da083a2bef364fcd7ce78ec57"} Jan 03 05:42:25 crc kubenswrapper[4854]: I0103 05:42:25.379345 4854 generic.go:334] "Generic (PLEG): container finished" podID="80855e9f-3a0c-439c-87cf-933b8825c398" containerID="5009f6a4818fde46c27896adc63c2053578af8e21905d228fc7129847413f341" exitCode=0 Jan 03 05:42:25 crc kubenswrapper[4854]: I0103 05:42:25.379414 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64gkx" event={"ID":"80855e9f-3a0c-439c-87cf-933b8825c398","Type":"ContainerDied","Data":"5009f6a4818fde46c27896adc63c2053578af8e21905d228fc7129847413f341"} Jan 03 05:42:25 crc kubenswrapper[4854]: I0103 05:42:25.381622 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztvzs" event={"ID":"28406837-5e09-49b4-8583-54a450f07ae4","Type":"ContainerStarted","Data":"3d24ce275ffa80601db6375eb57fbcfe4a4d5c8b0beb3defb4fb798012eb6526"} Jan 03 05:42:25 crc kubenswrapper[4854]: I0103 05:42:25.398875 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dsx2k" podStartSLOduration=3.17883481 podStartE2EDuration="1m20.398849869s" podCreationTimestamp="2026-01-03 05:41:05 +0000 UTC" firstStartedPulling="2026-01-03 05:41:07.588890975 +0000 UTC m=+45.915467547" lastFinishedPulling="2026-01-03 05:42:24.808906024 +0000 UTC m=+123.135482606" observedRunningTime="2026-01-03 05:42:25.393170012 +0000 UTC m=+123.719746614" watchObservedRunningTime="2026-01-03 05:42:25.398849869 +0000 UTC m=+123.725426451" Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 05:42:26.140920 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 
05:42:26.140961 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 05:42:26.387965 4854 generic.go:334] "Generic (PLEG): container finished" podID="28406837-5e09-49b4-8583-54a450f07ae4" containerID="3d24ce275ffa80601db6375eb57fbcfe4a4d5c8b0beb3defb4fb798012eb6526" exitCode=0 Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 05:42:26.388043 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztvzs" event={"ID":"28406837-5e09-49b4-8583-54a450f07ae4","Type":"ContainerDied","Data":"3d24ce275ffa80601db6375eb57fbcfe4a4d5c8b0beb3defb4fb798012eb6526"} Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 05:42:26.391135 4854 generic.go:334] "Generic (PLEG): container finished" podID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerID="216a2eb4bb8838d4e80e268febc637ca0fdc1d038f57b491928448bd39de2687" exitCode=0 Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 05:42:26.391192 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj878" event={"ID":"dd260432-f4bc-4c81-a5e1-e3205534cda8","Type":"ContainerDied","Data":"216a2eb4bb8838d4e80e268febc637ca0fdc1d038f57b491928448bd39de2687"} Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 05:42:26.397071 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64gkx" event={"ID":"80855e9f-3a0c-439c-87cf-933b8825c398","Type":"ContainerStarted","Data":"c084121ee582724f034cb0a71f515135288bad9d7d51135e49266427e49c725c"} Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 05:42:26.400851 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h7drl"] Jan 03 05:42:26 crc kubenswrapper[4854]: I0103 05:42:26.464762 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-64gkx" podStartSLOduration=3.273955514 podStartE2EDuration="1m21.464739525s" podCreationTimestamp="2026-01-03 05:41:05 +0000 UTC" firstStartedPulling="2026-01-03 05:41:07.637347581 +0000 UTC m=+45.963924153" lastFinishedPulling="2026-01-03 05:42:25.828131582 +0000 UTC m=+124.154708164" observedRunningTime="2026-01-03 05:42:26.458997046 +0000 UTC m=+124.785573638" watchObservedRunningTime="2026-01-03 05:42:26.464739525 +0000 UTC m=+124.791316137" Jan 03 05:42:27 crc kubenswrapper[4854]: I0103 05:42:27.209671 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dsx2k" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="registry-server" probeResult="failure" output=< Jan 03 05:42:27 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 05:42:27 crc kubenswrapper[4854]: > Jan 03 05:42:27 crc kubenswrapper[4854]: I0103 05:42:27.408670 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj878" event={"ID":"dd260432-f4bc-4c81-a5e1-e3205534cda8","Type":"ContainerStarted","Data":"c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb"} Jan 03 05:42:27 crc kubenswrapper[4854]: I0103 05:42:27.413866 4854 generic.go:334] "Generic (PLEG): container finished" podID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerID="c7dec33c8735f5b4e73712682ae1341fc298b8078565d9fb49fb2bcc536db146" exitCode=0 Jan 03 05:42:27 crc kubenswrapper[4854]: I0103 05:42:27.413934 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-c8dxw" event={"ID":"be6f79dd-0ea7-442e-ab1e-e35b15d45721","Type":"ContainerDied","Data":"c7dec33c8735f5b4e73712682ae1341fc298b8078565d9fb49fb2bcc536db146"} Jan 03 05:42:27 crc kubenswrapper[4854]: I0103 05:42:27.429429 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztvzs" event={"ID":"28406837-5e09-49b4-8583-54a450f07ae4","Type":"ContainerStarted","Data":"76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199"} Jan 03 05:42:27 crc kubenswrapper[4854]: I0103 05:42:27.431965 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cj878" podStartSLOduration=3.399320891 podStartE2EDuration="1m20.431955365s" podCreationTimestamp="2026-01-03 05:41:07 +0000 UTC" firstStartedPulling="2026-01-03 05:41:09.748599622 +0000 UTC m=+48.075176194" lastFinishedPulling="2026-01-03 05:42:26.781234096 +0000 UTC m=+125.107810668" observedRunningTime="2026-01-03 05:42:27.430611041 +0000 UTC m=+125.757187633" watchObservedRunningTime="2026-01-03 05:42:27.431955365 +0000 UTC m=+125.758531937" Jan 03 05:42:27 crc kubenswrapper[4854]: I0103 05:42:27.446454 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ztvzs" podStartSLOduration=3.436548495 podStartE2EDuration="1m19.44643236s" podCreationTimestamp="2026-01-03 05:41:08 +0000 UTC" firstStartedPulling="2026-01-03 05:41:10.784217214 +0000 UTC m=+49.110793786" lastFinishedPulling="2026-01-03 05:42:26.794101079 +0000 UTC m=+125.120677651" observedRunningTime="2026-01-03 05:42:27.446012119 +0000 UTC m=+125.772588701" watchObservedRunningTime="2026-01-03 05:42:27.44643236 +0000 UTC m=+125.773008932" Jan 03 05:42:28 crc kubenswrapper[4854]: I0103 05:42:28.346218 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cj878" Jan 03 05:42:28 crc kubenswrapper[4854]: I0103 05:42:28.346262 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cj878" Jan 03 05:42:29 crc kubenswrapper[4854]: I0103 05:42:29.404523 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cj878" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="registry-server" probeResult="failure" output=< Jan 03 05:42:29 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 05:42:29 crc kubenswrapper[4854]: > Jan 03 05:42:29 crc kubenswrapper[4854]: I0103 05:42:29.440910 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8dxw" event={"ID":"be6f79dd-0ea7-442e-ab1e-e35b15d45721","Type":"ContainerStarted","Data":"d69da54aa74785fd1ee550ed529b9658bf6d8cb5e91ca308295fdce996446a48"} Jan 03 05:42:29 crc kubenswrapper[4854]: I0103 05:42:29.442702 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqxfg" event={"ID":"6127414f-e3e3-4c52-81a8-f6fea70b7d0c","Type":"ContainerStarted","Data":"f82c4f86011d8a52d9798620693b5d376a0813d6575a5151c80632ad36eeec27"} Jan 03 05:42:29 crc kubenswrapper[4854]: I0103 05:42:29.444459 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2b22" event={"ID":"332dcfb7-8bcf-46bf-9168-4bdb4411e55e","Type":"ContainerStarted","Data":"52a75d9111d592b472af1dc45f1f0e978fa384ce0d37e2d305e42bac2b12c7fa"} Jan 03 
05:42:29 crc kubenswrapper[4854]: I0103 05:42:29.464291 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c8dxw" podStartSLOduration=3.931267374 podStartE2EDuration="1m22.464271582s" podCreationTimestamp="2026-01-03 05:41:07 +0000 UTC" firstStartedPulling="2026-01-03 05:41:09.753829007 +0000 UTC m=+48.080405579" lastFinishedPulling="2026-01-03 05:42:28.286833215 +0000 UTC m=+126.613409787" observedRunningTime="2026-01-03 05:42:29.460983137 +0000 UTC m=+127.787559699" watchObservedRunningTime="2026-01-03 05:42:29.464271582 +0000 UTC m=+127.790848154" Jan 03 05:42:29 crc kubenswrapper[4854]: I0103 05:42:29.478370 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ztvzs" Jan 03 05:42:29 crc kubenswrapper[4854]: I0103 05:42:29.478445 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ztvzs" Jan 03 05:42:30 crc kubenswrapper[4854]: I0103 05:42:30.451147 4854 generic.go:334] "Generic (PLEG): container finished" podID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerID="52a75d9111d592b472af1dc45f1f0e978fa384ce0d37e2d305e42bac2b12c7fa" exitCode=0 Jan 03 05:42:30 crc kubenswrapper[4854]: I0103 05:42:30.451205 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2b22" event={"ID":"332dcfb7-8bcf-46bf-9168-4bdb4411e55e","Type":"ContainerDied","Data":"52a75d9111d592b472af1dc45f1f0e978fa384ce0d37e2d305e42bac2b12c7fa"} Jan 03 05:42:30 crc kubenswrapper[4854]: I0103 05:42:30.453029 4854 generic.go:334] "Generic (PLEG): container finished" podID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerID="f82c4f86011d8a52d9798620693b5d376a0813d6575a5151c80632ad36eeec27" exitCode=0 Jan 03 05:42:30 crc kubenswrapper[4854]: I0103 05:42:30.453069 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqxfg" event={"ID":"6127414f-e3e3-4c52-81a8-f6fea70b7d0c","Type":"ContainerDied","Data":"f82c4f86011d8a52d9798620693b5d376a0813d6575a5151c80632ad36eeec27"} Jan 03 05:42:30 crc kubenswrapper[4854]: I0103 05:42:30.532745 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ztvzs" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="registry-server" probeResult="failure" output=< Jan 03 05:42:30 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 05:42:30 crc kubenswrapper[4854]: > Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.361000 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k8nxq"] Jan 03 05:42:33 crc kubenswrapper[4854]: E0103 05:42:33.361612 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af7da24-cf4a-4127-9fb6-bc43d033a87b" containerName="pruner" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.361631 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af7da24-cf4a-4127-9fb6-bc43d033a87b" containerName="pruner" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.361771 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af7da24-cf4a-4127-9fb6-bc43d033a87b" containerName="pruner" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.362218 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.383913 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k8nxq"] Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.546997 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-registry-tls\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.547065 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-bound-sa-token\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.547164 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b633dc70-c725-4f1b-9595-aee7f6c165b4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.547229 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b633dc70-c725-4f1b-9595-aee7f6c165b4-registry-certificates\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.547258 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b633dc70-c725-4f1b-9595-aee7f6c165b4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.547297 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.547354 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6h6p\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-kube-api-access-w6h6p\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.547425 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b633dc70-c725-4f1b-9595-aee7f6c165b4-trusted-ca\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.570106 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.649155 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6h6p\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-kube-api-access-w6h6p\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.649196 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b633dc70-c725-4f1b-9595-aee7f6c165b4-trusted-ca\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.649230 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-registry-tls\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.649265 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-bound-sa-token\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.649286 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b633dc70-c725-4f1b-9595-aee7f6c165b4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.649319 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b633dc70-c725-4f1b-9595-aee7f6c165b4-registry-certificates\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.649337 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b633dc70-c725-4f1b-9595-aee7f6c165b4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.651671 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b633dc70-c725-4f1b-9595-aee7f6c165b4-registry-certificates\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.651862 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b633dc70-c725-4f1b-9595-aee7f6c165b4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.652076 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b633dc70-c725-4f1b-9595-aee7f6c165b4-trusted-ca\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.660698 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b633dc70-c725-4f1b-9595-aee7f6c165b4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.664489 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6h6p\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-kube-api-access-w6h6p\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.666100 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-bound-sa-token\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.686164 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b633dc70-c725-4f1b-9595-aee7f6c165b4-registry-tls\") pod \"image-registry-66df7c8f76-k8nxq\" (UID: \"b633dc70-c725-4f1b-9595-aee7f6c165b4\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:33 crc kubenswrapper[4854]: I0103 05:42:33.978513 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:34 crc kubenswrapper[4854]: I0103 05:42:34.381861 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k8nxq"] Jan 03 05:42:34 crc kubenswrapper[4854]: W0103 05:42:34.391122 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb633dc70_c725_4f1b_9595_aee7f6c165b4.slice/crio-4b96ff0c4b48568e92f50aa29917fbc7e55e17e61dcd1f332356579aab517a80 WatchSource:0}: Error finding container 4b96ff0c4b48568e92f50aa29917fbc7e55e17e61dcd1f332356579aab517a80: Status 404 returned error can't find the container with id 4b96ff0c4b48568e92f50aa29917fbc7e55e17e61dcd1f332356579aab517a80 Jan 03 05:42:34 crc kubenswrapper[4854]: I0103 05:42:34.477121 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" event={"ID":"b633dc70-c725-4f1b-9595-aee7f6c165b4","Type":"ContainerStarted","Data":"4b96ff0c4b48568e92f50aa29917fbc7e55e17e61dcd1f332356579aab517a80"} Jan 03 05:42:35 crc kubenswrapper[4854]: I0103 05:42:35.761450 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:42:35 crc kubenswrapper[4854]: I0103 05:42:35.761510 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:42:36 crc kubenswrapper[4854]: I0103 05:42:36.142136 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:42:36 crc kubenswrapper[4854]: I0103 05:42:36.200802 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:42:36 crc kubenswrapper[4854]: I0103 05:42:36.247302 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:42:36 crc kubenswrapper[4854]: I0103 05:42:36.762173 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:42:36 crc kubenswrapper[4854]: I0103 05:42:36.802209 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsx2k"] Jan 03 05:42:37 crc kubenswrapper[4854]: I0103 05:42:37.496779 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" event={"ID":"b633dc70-c725-4f1b-9595-aee7f6c165b4","Type":"ContainerStarted","Data":"79f31ece74c7c71a3ac5e8b1497c7401a52fca93c267d9bcb7b19c13821144d8"} Jan 03 05:42:37 crc kubenswrapper[4854]: I0103 05:42:37.497246 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dsx2k" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="registry-server" containerID="cri-o://7f033c1151c54efd57adfa810788bbfb6f3c385da083a2bef364fcd7ce78ec57" gracePeriod=2 Jan 03 05:42:37 crc kubenswrapper[4854]: I0103 05:42:37.918156 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:42:37 crc kubenswrapper[4854]: I0103 05:42:37.918511 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:42:37 
crc kubenswrapper[4854]: I0103 05:42:37.967490 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:42:38 crc kubenswrapper[4854]: I0103 05:42:38.385175 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cj878" Jan 03 05:42:38 crc kubenswrapper[4854]: I0103 05:42:38.426670 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cj878" Jan 03 05:42:38 crc kubenswrapper[4854]: I0103 05:42:38.503570 4854 generic.go:334] "Generic (PLEG): container finished" podID="7eefb384-ef8d-4c37-9287-20114d60743d" containerID="7f033c1151c54efd57adfa810788bbfb6f3c385da083a2bef364fcd7ce78ec57" exitCode=0 Jan 03 05:42:38 crc kubenswrapper[4854]: I0103 05:42:38.503719 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsx2k" event={"ID":"7eefb384-ef8d-4c37-9287-20114d60743d","Type":"ContainerDied","Data":"7f033c1151c54efd57adfa810788bbfb6f3c385da083a2bef364fcd7ce78ec57"} Jan 03 05:42:38 crc kubenswrapper[4854]: I0103 05:42:38.524975 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" podStartSLOduration=5.524958391 podStartE2EDuration="5.524958391s" podCreationTimestamp="2026-01-03 05:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:42:38.521600514 +0000 UTC m=+136.848177096" watchObservedRunningTime="2026-01-03 05:42:38.524958391 +0000 UTC m=+136.851534953" Jan 03 05:42:38 crc kubenswrapper[4854]: I0103 05:42:38.557378 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:42:39 crc kubenswrapper[4854]: I0103 05:42:39.526940 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ztvzs" Jan 03 05:42:39 crc kubenswrapper[4854]: I0103 05:42:39.564486 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ztvzs" Jan 03 05:42:40 crc kubenswrapper[4854]: I0103 05:42:40.403432 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cj878"] Jan 03 05:42:40 crc kubenswrapper[4854]: I0103 05:42:40.403787 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cj878" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="registry-server" containerID="cri-o://c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" gracePeriod=2 Jan 03 05:42:41 crc kubenswrapper[4854]: I0103 05:42:41.804188 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ztvzs"] Jan 03 05:42:41 crc kubenswrapper[4854]: I0103 05:42:41.804589 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ztvzs" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="registry-server" containerID="cri-o://76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" gracePeriod=2 Jan 03 05:42:43 crc kubenswrapper[4854]: I0103 05:42:43.358974 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b46bdbb7f-szr25"] 
Jan 03 05:42:43 crc kubenswrapper[4854]: I0103 05:42:43.359676 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" containerName="controller-manager" containerID="cri-o://40700b72f405147dac2d7ef6884431498ebb959ac301a575ef8860d36f4cc4f4" gracePeriod=30 Jan 03 05:42:43 crc kubenswrapper[4854]: I0103 05:42:43.451305 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z"] Jan 03 05:42:43 crc kubenswrapper[4854]: I0103 05:42:43.451514 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" podUID="95421875-a016-4eba-8017-27f0276a6bc4" containerName="route-controller-manager" containerID="cri-o://ed4b4f37e6ba7ba03783c9d9604cc2dbb871c177cca5ae2ebbda2328416cdd46" gracePeriod=30 Jan 03 05:42:43 crc kubenswrapper[4854]: I0103 05:42:43.980623 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.065626 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.227316 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-utilities\") pod \"7eefb384-ef8d-4c37-9287-20114d60743d\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.227801 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lvbg\" (UniqueName: \"kubernetes.io/projected/7eefb384-ef8d-4c37-9287-20114d60743d-kube-api-access-4lvbg\") pod \"7eefb384-ef8d-4c37-9287-20114d60743d\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.227932 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-catalog-content\") pod \"7eefb384-ef8d-4c37-9287-20114d60743d\" (UID: \"7eefb384-ef8d-4c37-9287-20114d60743d\") " Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.229952 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-utilities" (OuterVolumeSpecName: "utilities") pod "7eefb384-ef8d-4c37-9287-20114d60743d" (UID: "7eefb384-ef8d-4c37-9287-20114d60743d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.235061 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eefb384-ef8d-4c37-9287-20114d60743d-kube-api-access-4lvbg" (OuterVolumeSpecName: "kube-api-access-4lvbg") pod "7eefb384-ef8d-4c37-9287-20114d60743d" (UID: "7eefb384-ef8d-4c37-9287-20114d60743d"). InnerVolumeSpecName "kube-api-access-4lvbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.329855 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lvbg\" (UniqueName: \"kubernetes.io/projected/7eefb384-ef8d-4c37-9287-20114d60743d-kube-api-access-4lvbg\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.329911 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.549471 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsx2k" event={"ID":"7eefb384-ef8d-4c37-9287-20114d60743d","Type":"ContainerDied","Data":"26049466ec30dc63065403ce278018f93b77d1f52acd1548d1605be3414a49e6"} Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.549560 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsx2k" Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.549568 4854 scope.go:117] "RemoveContainer" containerID="7f033c1151c54efd57adfa810788bbfb6f3c385da083a2bef364fcd7ce78ec57" Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.649137 4854 patch_prober.go:28] interesting pod/controller-manager-b46bdbb7f-szr25 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 03 05:42:44 crc kubenswrapper[4854]: I0103 05:42:44.649890 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 03 05:42:45 crc kubenswrapper[4854]: I0103 05:42:45.140842 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7eefb384-ef8d-4c37-9287-20114d60743d" (UID: "7eefb384-ef8d-4c37-9287-20114d60743d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:42:45 crc kubenswrapper[4854]: I0103 05:42:45.142786 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefb384-ef8d-4c37-9287-20114d60743d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:42:45 crc kubenswrapper[4854]: I0103 05:42:45.195179 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsx2k"] Jan 03 05:42:45 crc kubenswrapper[4854]: I0103 05:42:45.198686 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dsx2k"] Jan 03 05:42:46 crc kubenswrapper[4854]: I0103 05:42:46.126506 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" path="/var/lib/kubelet/pods/7eefb384-ef8d-4c37-9287-20114d60743d/volumes" Jan 03 05:42:48 crc kubenswrapper[4854]: E0103 05:42:48.344450 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:48 crc kubenswrapper[4854]: E0103 05:42:48.345671 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:48 crc kubenswrapper[4854]: E0103 05:42:48.346220 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:48 crc kubenswrapper[4854]: E0103 05:42:48.346372 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cj878" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="registry-server" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.479704 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.480627 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" 
containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.481424 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.481567 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-ztvzs" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="registry-server" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.833360 4854 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.833728 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="extract-utilities" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.833750 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="extract-utilities" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.833770 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="extract-content" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.833785 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="extract-content" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.833809 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="registry-server" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.833823 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="registry-server" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.834024 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eefb384-ef8d-4c37-9287-20114d60743d" containerName="registry-server" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.834745 4854 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.834768 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.835327 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57" gracePeriod=15 Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.835349 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb" gracePeriod=15 Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.835531 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51" gracePeriod=15 Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.835560 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92" gracePeriod=15 Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.835649 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53" gracePeriod=15 Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.836378 4854 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.836705 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.836725 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.836744 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.836758 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.836776 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.836788 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.836808 4854 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.836820 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.836842 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.836855 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.836878 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.836890 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.837192 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.837216 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.837232 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.837252 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.837274 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.837301 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 05:42:49 crc kubenswrapper[4854]: E0103 05:42:49.837490 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.837505 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.923114 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.923264 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") 
" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.923328 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.923376 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.923582 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.923643 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.923704 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:49 crc kubenswrapper[4854]: I0103 05:42:49.923774 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.025766 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.025985 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026332 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026396 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026486 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026542 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026609 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026620 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026649 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026687 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026728 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026689 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026732 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026787 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026787 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:50 crc kubenswrapper[4854]: I0103 05:42:50.026789 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:51 crc kubenswrapper[4854]: I0103 05:42:51.277791 4854 generic.go:334] "Generic (PLEG): container finished" podID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" containerID="40700b72f405147dac2d7ef6884431498ebb959ac301a575ef8860d36f4cc4f4" exitCode=0 Jan 03 05:42:51 crc kubenswrapper[4854]: I0103 05:42:51.277847 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" event={"ID":"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c","Type":"ContainerDied","Data":"40700b72f405147dac2d7ef6884431498ebb959ac301a575ef8860d36f4cc4f4"} Jan 03 05:42:51 crc kubenswrapper[4854]: I0103 05:42:51.442202 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" containerName="oauth-openshift" containerID="cri-o://91a30b02a02410d5306acdb48fd96666de1bed90ab02f525930c913ecb5b8fbb" gracePeriod=15 Jan 03 05:42:51 crc kubenswrapper[4854]: I0103 05:42:51.733852 4854 patch_prober.go:28] interesting pod/route-controller-manager-85f4ff9897-qqv5z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Jan 03 05:42:51 crc kubenswrapper[4854]: I0103 05:42:51.733942 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" podUID="95421875-a016-4eba-8017-27f0276a6bc4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Jan 03 05:42:51 crc kubenswrapper[4854]: E0103 05:42:51.734874 4854 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.102:6443: connect: connection refused" event=< Jan 03 05:42:51 crc kubenswrapper[4854]: &Event{ObjectMeta:{route-controller-manager-85f4ff9897-qqv5z.18872230cb7c57a1 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-85f4ff9897-qqv5z,UID:95421875-a016-4eba-8017-27f0276a6bc4,APIVersion:v1,ResourceVersion:29227,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.56:8443/healthz": dial tcp 10.217.0.56:8443: connect: connection refused Jan 03 05:42:51 crc kubenswrapper[4854]: body: Jan 03 05:42:51 crc kubenswrapper[4854]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 05:42:51.733907361 +0000 UTC m=+150.060483963,LastTimestamp:2026-01-03 05:42:51.733907361 +0000 UTC m=+150.060483963,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 03 05:42:51 crc kubenswrapper[4854]: > Jan 03 05:42:52 crc kubenswrapper[4854]: I0103 05:42:52.396041 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztvzs_28406837-5e09-49b4-8583-54a450f07ae4/registry-server/0.log" Jan 03 05:42:52 crc kubenswrapper[4854]: I0103 05:42:52.397847 4854 generic.go:334] "Generic (PLEG): container finished" podID="28406837-5e09-49b4-8583-54a450f07ae4" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" exitCode=137 Jan 03 05:42:52 crc kubenswrapper[4854]: I0103 05:42:52.397956 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztvzs" event={"ID":"28406837-5e09-49b4-8583-54a450f07ae4","Type":"ContainerDied","Data":"76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199"} Jan 03 05:42:52 crc kubenswrapper[4854]: I0103 05:42:52.400745 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cj878_dd260432-f4bc-4c81-a5e1-e3205534cda8/registry-server/0.log" Jan 03 05:42:52 crc kubenswrapper[4854]: I0103 05:42:52.401815 4854 generic.go:334] "Generic (PLEG): container finished" podID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" exitCode=137 Jan 03 05:42:52 crc kubenswrapper[4854]: I0103 05:42:52.401899 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj878" event={"ID":"dd260432-f4bc-4c81-a5e1-e3205534cda8","Type":"ContainerDied","Data":"c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb"} Jan 03 05:42:52 crc kubenswrapper[4854]: I0103 05:42:52.405271 4854 generic.go:334] "Generic (PLEG): container finished" podID="95421875-a016-4eba-8017-27f0276a6bc4" containerID="ed4b4f37e6ba7ba03783c9d9604cc2dbb871c177cca5ae2ebbda2328416cdd46" exitCode=0 Jan 03 05:42:52 crc kubenswrapper[4854]: I0103 05:42:52.405322 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" event={"ID":"95421875-a016-4eba-8017-27f0276a6bc4","Type":"ContainerDied","Data":"ed4b4f37e6ba7ba03783c9d9604cc2dbb871c177cca5ae2ebbda2328416cdd46"} Jan 03 05:42:53 crc kubenswrapper[4854]: E0103 05:42:53.267635 4854 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:42:53 crc kubenswrapper[4854]: E0103 05:42:53.268351 4854 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:42:53 crc kubenswrapper[4854]: E0103 05:42:53.268761 4854 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:42:53 crc kubenswrapper[4854]: E0103 05:42:53.269107 4854 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:42:53 crc kubenswrapper[4854]: E0103 05:42:53.269480 4854 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:42:53 crc kubenswrapper[4854]: I0103 05:42:53.269570 4854 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 03 05:42:53 crc kubenswrapper[4854]: E0103 05:42:53.270153 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="200ms" Jan 03 05:42:53 crc kubenswrapper[4854]: I0103 05:42:53.415749 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 05:42:53 crc kubenswrapper[4854]: I0103 05:42:53.418784 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 05:42:53 crc kubenswrapper[4854]: I0103 05:42:53.420125 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53" exitCode=2 Jan 03 05:42:53 crc kubenswrapper[4854]: E0103 05:42:53.471510 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="400ms" Jan 03 05:42:53 crc kubenswrapper[4854]: E0103 05:42:53.873218 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="800ms" Jan 03 05:42:53 crc kubenswrapper[4854]: I0103 05:42:53.988057 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" Jan 03 05:42:53 crc 
kubenswrapper[4854]: I0103 05:42:53.988962 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:42:54 crc kubenswrapper[4854]: E0103 05:42:54.032565 4854 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.102:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" volumeName="registry-storage" Jan 03 05:42:54 crc kubenswrapper[4854]: I0103 05:42:54.648374 4854 patch_prober.go:28] interesting pod/controller-manager-b46bdbb7f-szr25 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 03 05:42:54 crc kubenswrapper[4854]: I0103 05:42:54.648470 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 03 05:42:54 crc kubenswrapper[4854]: E0103 05:42:54.675247 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="1.6s" Jan 03 05:42:54 crc kubenswrapper[4854]: E0103 05:42:54.892590 4854 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.102:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:54 crc kubenswrapper[4854]: I0103 05:42:54.893370 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:42:56 crc kubenswrapper[4854]: E0103 05:42:56.276207 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="3.2s" Jan 03 05:42:56 crc kubenswrapper[4854]: I0103 05:42:56.440184 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 05:42:56 crc kubenswrapper[4854]: I0103 05:42:56.442150 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 05:42:56 crc kubenswrapper[4854]: I0103 05:42:56.443115 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb" exitCode=0 Jan 03 05:42:56 crc kubenswrapper[4854]: I0103 05:42:56.443162 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57" exitCode=0 Jan 03 05:42:58 crc kubenswrapper[4854]: E0103 05:42:58.344233 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:58 crc kubenswrapper[4854]: E0103 05:42:58.346138 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:58 crc kubenswrapper[4854]: E0103 05:42:58.346680 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:58 crc kubenswrapper[4854]: E0103 05:42:58.346797 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cj878" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="registry-server" Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.466924 4854 generic.go:334] "Generic (PLEG): container finished" podID="c31b366b-2182-4c59-8777-e552553ba8a8" containerID="91a30b02a02410d5306acdb48fd96666de1bed90ab02f525930c913ecb5b8fbb" exitCode=0 Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.467122 4854 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" event={"ID":"c31b366b-2182-4c59-8777-e552553ba8a8","Type":"ContainerDied","Data":"91a30b02a02410d5306acdb48fd96666de1bed90ab02f525930c913ecb5b8fbb"} Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.471011 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.473872 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.475234 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51" exitCode=0 Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.475321 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92" exitCode=0 Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.478776 4854 generic.go:334] "Generic (PLEG): container finished" podID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" containerID="6244e3491422980599193660b370f80a475b2819d07bcb04d4cadd6908591e45" exitCode=0 Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.478839 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0380b43d-2d7f-490d-a822-1740bfa5c9ac","Type":"ContainerDied","Data":"6244e3491422980599193660b370f80a475b2819d07bcb04d4cadd6908591e45"} Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.479825 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:42:58 crc kubenswrapper[4854]: I0103 05:42:58.480616 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:42:59 crc kubenswrapper[4854]: E0103 05:42:59.478196 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="6.4s" Jan 03 05:42:59 crc kubenswrapper[4854]: E0103 05:42:59.480921 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:59 crc kubenswrapper[4854]: E0103 05:42:59.481541 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc 
= container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:59 crc kubenswrapper[4854]: E0103 05:42:59.482378 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:42:59 crc kubenswrapper[4854]: E0103 05:42:59.482708 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-ztvzs" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="registry-server" Jan 03 05:42:59 crc kubenswrapper[4854]: I0103 05:42:59.845477 4854 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-h7drl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.35:6443/healthz\": dial tcp 10.217.0.35:6443: connect: connection refused" start-of-body= Jan 03 05:42:59 crc kubenswrapper[4854]: I0103 05:42:59.845556 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.35:6443/healthz\": dial tcp 10.217.0.35:6443: connect: connection refused" Jan 03 05:43:00 crc kubenswrapper[4854]: E0103 05:43:00.745953 4854 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.102:6443: connect: connection refused" event=< Jan 03 05:43:00 crc kubenswrapper[4854]: &Event{ObjectMeta:{route-controller-manager-85f4ff9897-qqv5z.18872230cb7c57a1 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-85f4ff9897-qqv5z,UID:95421875-a016-4eba-8017-27f0276a6bc4,APIVersion:v1,ResourceVersion:29227,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.56:8443/healthz": dial tcp 10.217.0.56:8443: connect: connection refused Jan 03 05:43:00 crc kubenswrapper[4854]: body: Jan 03 05:43:00 crc kubenswrapper[4854]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 05:42:51.733907361 +0000 UTC m=+150.060483963,LastTimestamp:2026-01-03 05:42:51.733907361 +0000 UTC m=+150.060483963,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 03 05:43:00 crc kubenswrapper[4854]: > Jan 03 05:43:02 crc kubenswrapper[4854]: I0103 05:43:02.120673 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:02 crc kubenswrapper[4854]: I0103 05:43:02.121146 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:02 crc kubenswrapper[4854]: I0103 05:43:02.732890 4854 patch_prober.go:28] interesting pod/route-controller-manager-85f4ff9897-qqv5z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:43:02 crc kubenswrapper[4854]: I0103 05:43:02.732987 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" podUID="95421875-a016-4eba-8017-27f0276a6bc4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:43:05 crc kubenswrapper[4854]: I0103 05:43:05.649288 4854 patch_prober.go:28] interesting pod/controller-manager-b46bdbb7f-szr25 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:43:05 crc kubenswrapper[4854]: I0103 05:43:05.649698 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 03 05:43:05 crc kubenswrapper[4854]: E0103 05:43:05.880772 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="7s" Jan 03 05:43:06 crc kubenswrapper[4854]: I0103 05:43:06.792134 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 03 05:43:06 crc kubenswrapper[4854]: I0103 05:43:06.792744 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 03 05:43:08 crc kubenswrapper[4854]: E0103 05:43:08.344406 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:43:08 crc kubenswrapper[4854]: E0103 05:43:08.345374 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:43:08 crc kubenswrapper[4854]: E0103 05:43:08.345767 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:43:08 crc kubenswrapper[4854]: E0103 05:43:08.345815 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cj878" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="registry-server" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.176003 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cj878_dd260432-f4bc-4c81-a5e1-e3205534cda8/registry-server/0.log" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.177223 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cj878" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.177906 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.178398 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.181072 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.220502 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82x46\" (UniqueName: \"kubernetes.io/projected/dd260432-f4bc-4c81-a5e1-e3205534cda8-kube-api-access-82x46\") pod \"dd260432-f4bc-4c81-a5e1-e3205534cda8\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.220626 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-catalog-content\") pod \"dd260432-f4bc-4c81-a5e1-e3205534cda8\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.220669 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-utilities\") pod \"dd260432-f4bc-4c81-a5e1-e3205534cda8\" (UID: \"dd260432-f4bc-4c81-a5e1-e3205534cda8\") " Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.222260 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-utilities" (OuterVolumeSpecName: "utilities") pod "dd260432-f4bc-4c81-a5e1-e3205534cda8" (UID: "dd260432-f4bc-4c81-a5e1-e3205534cda8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.230270 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd260432-f4bc-4c81-a5e1-e3205534cda8-kube-api-access-82x46" (OuterVolumeSpecName: "kube-api-access-82x46") pod "dd260432-f4bc-4c81-a5e1-e3205534cda8" (UID: "dd260432-f4bc-4c81-a5e1-e3205534cda8"). InnerVolumeSpecName "kube-api-access-82x46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.248488 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd260432-f4bc-4c81-a5e1-e3205534cda8" (UID: "dd260432-f4bc-4c81-a5e1-e3205534cda8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.322566 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82x46\" (UniqueName: \"kubernetes.io/projected/dd260432-f4bc-4c81-a5e1-e3205534cda8-kube-api-access-82x46\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.322599 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.322609 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd260432-f4bc-4c81-a5e1-e3205534cda8-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.404960 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.405018 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: E0103 05:43:09.478798 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:43:09 crc kubenswrapper[4854]: E0103 05:43:09.479152 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:43:09 crc kubenswrapper[4854]: E0103 05:43:09.479831 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 05:43:09 crc kubenswrapper[4854]: E0103 05:43:09.479857 4854 prober.go:104] "Probe errored" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-ztvzs" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="registry-server" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.562840 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cj878_dd260432-f4bc-4c81-a5e1-e3205534cda8/registry-server/0.log" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.564656 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cj878" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.564647 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj878" event={"ID":"dd260432-f4bc-4c81-a5e1-e3205534cda8","Type":"ContainerDied","Data":"34c09feab4f8d7b19a4e0db7da17b6b2e00008df846238b2e01ae7f51a5e02c6"} Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.565662 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.566474 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.567152 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.569683 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.569756 4854 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13" exitCode=1 Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.569792 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13"} Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.570521 4854 scope.go:117] "RemoveContainer" containerID="d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.570791 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.571318 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.571895 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.572431 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.587837 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.588480 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.588926 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.589428 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.845810 4854 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-h7drl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.35:6443/healthz\": dial tcp 10.217.0.35:6443: connect: connection refused" start-of-body= Jan 03 05:43:09 crc kubenswrapper[4854]: I0103 05:43:09.845899 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" 
podUID="c31b366b-2182-4c59-8777-e552553ba8a8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.35:6443/healthz\": dial tcp 10.217.0.35:6443: connect: connection refused" Jan 03 05:43:10 crc kubenswrapper[4854]: E0103 05:43:10.747977 4854 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.102:6443: connect: connection refused" event=< Jan 03 05:43:10 crc kubenswrapper[4854]: &Event{ObjectMeta:{route-controller-manager-85f4ff9897-qqv5z.18872230cb7c57a1 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-85f4ff9897-qqv5z,UID:95421875-a016-4eba-8017-27f0276a6bc4,APIVersion:v1,ResourceVersion:29227,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.56:8443/healthz": dial tcp 10.217.0.56:8443: connect: connection refused Jan 03 05:43:10 crc kubenswrapper[4854]: body: Jan 03 05:43:10 crc kubenswrapper[4854]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 05:42:51.733907361 +0000 UTC m=+150.060483963,LastTimestamp:2026-01-03 05:42:51.733907361 +0000 UTC m=+150.060483963,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 03 05:43:10 crc kubenswrapper[4854]: > Jan 03 05:43:11 crc kubenswrapper[4854]: I0103 05:43:11.341217 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:43:11 crc kubenswrapper[4854]: I0103 05:43:11.755290 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:43:11 crc kubenswrapper[4854]: I0103 05:43:11.755361 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:43:12 crc kubenswrapper[4854]: I0103 05:43:12.124909 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:12 crc kubenswrapper[4854]: I0103 05:43:12.125277 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:12 crc kubenswrapper[4854]: I0103 05:43:12.125572 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" 
pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:12 crc kubenswrapper[4854]: I0103 05:43:12.125857 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:12 crc kubenswrapper[4854]: I0103 05:43:12.126496 4854 scope.go:117] "RemoveContainer" containerID="881bd14d507af4d806c909749c7308fcaae6924c53fd58bcb90c4b0d23944f6f" Jan 03 05:43:12 crc kubenswrapper[4854]: I0103 05:43:12.738649 4854 patch_prober.go:28] interesting pod/route-controller-manager-85f4ff9897-qqv5z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 05:43:12 crc kubenswrapper[4854]: I0103 05:43:12.738944 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" podUID="95421875-a016-4eba-8017-27f0276a6bc4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 05:43:12 crc kubenswrapper[4854]: E0103 05:43:12.882382 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="7s" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.380202 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztvzs_28406837-5e09-49b4-8583-54a450f07ae4/registry-server/0.log" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.381682 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ztvzs" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.382797 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.383353 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.383834 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.384535 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.384825 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.385199 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.386155 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.386679 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.387017 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.388117 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.388594 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.389273 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.393735 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.394719 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.395391 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.395951 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.396397 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.396691 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.397072 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.397722 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.398257 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.398627 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.398998 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.399487 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.399892 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.400522 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.401236 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.401740 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.405378 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.407187 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.407882 4854 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.408257 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.408688 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.409003 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.409370 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.409788 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.410117 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.410575 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.410761 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.410966 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.411467 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.412016 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.412481 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.412704 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.412999 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.413411 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.413632 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.413888 4854 status_manager.go:851] "Failed to get status for pod" 
podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.414359 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515136 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c31b366b-2182-4c59-8777-e552553ba8a8-audit-dir\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515196 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-config\") pod \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515235 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-provider-selection\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515261 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-audit-policies\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515284 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kubelet-dir\") pod \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515305 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-router-certs\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515332 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-ocp-branding-template\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515357 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-service-ca\") pod 
\"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515378 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515408 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-serving-cert\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515412 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31b366b-2182-4c59-8777-e552553ba8a8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515429 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-session\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515464 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-error\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515490 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-var-lock\") pod \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515514 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-login\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515538 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-cliconfig\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515563 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-catalog-content\") pod \"28406837-5e09-49b4-8583-54a450f07ae4\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515591 4854 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-idp-0-file-data\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515617 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kube-api-access\") pod \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\" (UID: \"0380b43d-2d7f-490d-a822-1740bfa5c9ac\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515641 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-client-ca\") pod \"95421875-a016-4eba-8017-27f0276a6bc4\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515673 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qzns\" (UniqueName: \"kubernetes.io/projected/95421875-a016-4eba-8017-27f0276a6bc4-kube-api-access-7qzns\") pod \"95421875-a016-4eba-8017-27f0276a6bc4\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515703 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfw4c\" (UniqueName: \"kubernetes.io/projected/c31b366b-2182-4c59-8777-e552553ba8a8-kube-api-access-vfw4c\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515722 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-trusted-ca-bundle\") pod \"c31b366b-2182-4c59-8777-e552553ba8a8\" (UID: \"c31b366b-2182-4c59-8777-e552553ba8a8\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515741 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-client-ca\") pod \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515761 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-serving-cert\") pod \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515791 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-config\") pod \"95421875-a016-4eba-8017-27f0276a6bc4\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515810 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 05:43:14 crc kubenswrapper[4854]: 
I0103 05:43:14.515827 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515852 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4c2r\" (UniqueName: \"kubernetes.io/projected/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-kube-api-access-x4c2r\") pod \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515876 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-utilities\") pod \"28406837-5e09-49b4-8583-54a450f07ae4\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515899 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgv75\" (UniqueName: \"kubernetes.io/projected/28406837-5e09-49b4-8583-54a450f07ae4-kube-api-access-hgv75\") pod \"28406837-5e09-49b4-8583-54a450f07ae4\" (UID: \"28406837-5e09-49b4-8583-54a450f07ae4\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515921 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95421875-a016-4eba-8017-27f0276a6bc4-serving-cert\") pod \"95421875-a016-4eba-8017-27f0276a6bc4\" (UID: \"95421875-a016-4eba-8017-27f0276a6bc4\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.515953 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-proxy-ca-bundles\") pod \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\" (UID: \"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c\") " Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.516216 4854 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c31b366b-2182-4c59-8777-e552553ba8a8-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.516917 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" (UID: "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.517145 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-config" (OuterVolumeSpecName: "config") pod "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" (UID: "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.517784 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.518161 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-client-ca" (OuterVolumeSpecName: "client-ca") pod "95421875-a016-4eba-8017-27f0276a6bc4" (UID: "95421875-a016-4eba-8017-27f0276a6bc4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.518877 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.518936 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.519874 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0380b43d-2d7f-490d-a822-1740bfa5c9ac" (UID: "0380b43d-2d7f-490d-a822-1740bfa5c9ac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.520862 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.521700 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-config" (OuterVolumeSpecName: "config") pod "95421875-a016-4eba-8017-27f0276a6bc4" (UID: "95421875-a016-4eba-8017-27f0276a6bc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.522274 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.522697 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-client-ca" (OuterVolumeSpecName: "client-ca") pod "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" (UID: "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.522721 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.522800 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.535116 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-var-lock" (OuterVolumeSpecName: "var-lock") pod "0380b43d-2d7f-490d-a822-1740bfa5c9ac" (UID: "0380b43d-2d7f-490d-a822-1740bfa5c9ac"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.535461 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.536138 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-utilities" (OuterVolumeSpecName: "utilities") pod "28406837-5e09-49b4-8583-54a450f07ae4" (UID: "28406837-5e09-49b4-8583-54a450f07ae4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.537471 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.540127 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95421875-a016-4eba-8017-27f0276a6bc4-kube-api-access-7qzns" (OuterVolumeSpecName: "kube-api-access-7qzns") pod "95421875-a016-4eba-8017-27f0276a6bc4" (UID: "95421875-a016-4eba-8017-27f0276a6bc4"). InnerVolumeSpecName "kube-api-access-7qzns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.544354 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.544630 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31b366b-2182-4c59-8777-e552553ba8a8-kube-api-access-vfw4c" (OuterVolumeSpecName: "kube-api-access-vfw4c") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "kube-api-access-vfw4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.545664 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.546182 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.545993 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95421875-a016-4eba-8017-27f0276a6bc4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "95421875-a016-4eba-8017-27f0276a6bc4" (UID: "95421875-a016-4eba-8017-27f0276a6bc4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.547376 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" (UID: "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.547942 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-kube-api-access-x4c2r" (OuterVolumeSpecName: "kube-api-access-x4c2r") pod "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" (UID: "b1a6f0af-e686-4a3a-b1af-b4ab77e8362c"). InnerVolumeSpecName "kube-api-access-x4c2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.547999 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.548178 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.548437 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c31b366b-2182-4c59-8777-e552553ba8a8" (UID: "c31b366b-2182-4c59-8777-e552553ba8a8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.548854 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0380b43d-2d7f-490d-a822-1740bfa5c9ac" (UID: "0380b43d-2d7f-490d-a822-1740bfa5c9ac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.562130 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28406837-5e09-49b4-8583-54a450f07ae4-kube-api-access-hgv75" (OuterVolumeSpecName: "kube-api-access-hgv75") pod "28406837-5e09-49b4-8583-54a450f07ae4" (UID: "28406837-5e09-49b4-8583-54a450f07ae4"). InnerVolumeSpecName "kube-api-access-hgv75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.605300 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" event={"ID":"c31b366b-2182-4c59-8777-e552553ba8a8","Type":"ContainerDied","Data":"9d5b94dfd9b8041d415b0d27ce6a6626e408d486e23c12e02993ec66fee529fe"} Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.605317 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.606510 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.607041 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.607517 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.607571 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ztvzs_28406837-5e09-49b4-8583-54a450f07ae4/registry-server/0.log" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.607953 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.608363 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.608543 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztvzs" event={"ID":"28406837-5e09-49b4-8583-54a450f07ae4","Type":"ContainerDied","Data":"593aaf5288fe9008494031595e2b1989b5c96ac91dc2e4f533d639ab666db3e3"} Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.608702 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ztvzs" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.608807 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.610400 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.610774 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.611310 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.612218 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.612201 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" event={"ID":"95421875-a016-4eba-8017-27f0276a6bc4","Type":"ContainerDied","Data":"41980d0a8fb9c6c05c3fd17f54281275fb20f06f5c28139b7460bd4fcc82c83f"} Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.612686 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.613449 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.614324 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.614815 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.615303 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.615723 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.615750 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.616212 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: 
I0103 05:43:14.617227 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617476 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qzns\" (UniqueName: \"kubernetes.io/projected/95421875-a016-4eba-8017-27f0276a6bc4-kube-api-access-7qzns\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617532 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617550 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617563 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfw4c\" (UniqueName: \"kubernetes.io/projected/c31b366b-2182-4c59-8777-e552553ba8a8-kube-api-access-vfw4c\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617575 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617614 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617625 4854 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617636 4854 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617646 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4c2r\" (UniqueName: \"kubernetes.io/projected/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-kube-api-access-x4c2r\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617684 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617700 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgv75\" (UniqueName: \"kubernetes.io/projected/28406837-5e09-49b4-8583-54a450f07ae4-kube-api-access-hgv75\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617710 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/95421875-a016-4eba-8017-27f0276a6bc4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617721 4854 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617732 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617767 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617782 4854 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617794 4854 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617805 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617817 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617855 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617867 4854 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617881 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617894 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617930 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 
03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617944 4854 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0380b43d-2d7f-490d-a822-1740bfa5c9ac-var-lock\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617955 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617966 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.617978 4854 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c31b366b-2182-4c59-8777-e552553ba8a8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.618014 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0380b43d-2d7f-490d-a822-1740bfa5c9ac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.618027 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95421875-a016-4eba-8017-27f0276a6bc4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.620908 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.621178 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.622059 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.622235 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.622575 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.623014 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.623656 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.623841 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.623988 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.624657 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.625221 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.625546 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.627521 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.627548 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" event={"ID":"b1a6f0af-e686-4a3a-b1af-b4ab77e8362c","Type":"ContainerDied","Data":"53d723eed195f9d14654d7dc950311a7459a3cdc0b186b5c9041573325499ee1"} Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.628587 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.629392 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.630023 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.630717 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.630968 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0380b43d-2d7f-490d-a822-1740bfa5c9ac","Type":"ContainerDied","Data":"772e3ad9a8e4c3381089d88db7f323b0fbe47a40dd52f0ae0373dab41b57898c"} Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.631128 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="772e3ad9a8e4c3381089d88db7f323b0fbe47a40dd52f0ae0373dab41b57898c" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.631164 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.631863 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.632339 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.633177 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.633610 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.634304 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.649327 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.650031 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.650790 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.651260 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" 
pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.651582 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.652029 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.652481 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.652883 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.653619 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.654706 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.655424 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.656897 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.657603 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.659219 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.659785 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.661152 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.661696 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.662381 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.663025 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.666054 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.666543 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" 
pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.667144 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.667592 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.668075 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.669372 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.669805 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.670245 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.695339 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28406837-5e09-49b4-8583-54a450f07ae4" (UID: "28406837-5e09-49b4-8583-54a450f07ae4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.718640 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28406837-5e09-49b4-8583-54a450f07ae4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:43:14 crc kubenswrapper[4854]: E0103 05:43:14.725498 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95421875_a016_4eba_8017_27f0276a6bc4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-64790143e744feecc69beb4a54703726f497886c2e48af6f98cbc04a2c021ff0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a6f0af_e686_4a3a_b1af_b4ab77e8362c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod0380b43d_2d7f_490d_a822_1740bfa5c9ac.slice/crio-772e3ad9a8e4c3381089d88db7f323b0fbe47a40dd52f0ae0373dab41b57898c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a6f0af_e686_4a3a_b1af_b4ab77e8362c.slice/crio-53d723eed195f9d14654d7dc950311a7459a3cdc0b186b5c9041573325499ee1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc31b366b_2182_4c59_8777_e552553ba8a8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc31b366b_2182_4c59_8777_e552553ba8a8.slice/crio-9d5b94dfd9b8041d415b0d27ce6a6626e408d486e23c12e02993ec66fee529fe\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.930358 4854 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.930986 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.931496 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.931979 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.933286 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.933898 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.934162 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.934436 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:14 crc kubenswrapper[4854]: I0103 05:43:14.934974 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:16 crc kubenswrapper[4854]: I0103 05:43:16.130387 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 03 05:43:16 crc kubenswrapper[4854]: I0103 05:43:16.791256 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:43:17 crc kubenswrapper[4854]: I0103 05:43:17.374551 4854 scope.go:117] "RemoveContainer" containerID="533aa8162348383e5432b6d8e1683685373f199c3205a906b6042d464da94f97" Jan 03 05:43:17 crc kubenswrapper[4854]: I0103 05:43:17.538180 4854 scope.go:117] "RemoveContainer" containerID="5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674" Jan 03 05:43:17 crc kubenswrapper[4854]: I0103 05:43:17.893709 4854 scope.go:117] "RemoveContainer" containerID="c07a08a31076c61af219ce8e33da4e65d03cb1b07f0d5d05d6a2e1fd1808f0cb" Jan 03 05:43:17 crc kubenswrapper[4854]: W0103 05:43:17.915645 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-565ec8255dfee400afdd43514d5fae89c1bd3560fdb6fa75987de97b5a857654 WatchSource:0}: Error finding container 
565ec8255dfee400afdd43514d5fae89c1bd3560fdb6fa75987de97b5a857654: Status 404 returned error can't find the container with id 565ec8255dfee400afdd43514d5fae89c1bd3560fdb6fa75987de97b5a857654 Jan 03 05:43:17 crc kubenswrapper[4854]: I0103 05:43:17.962948 4854 scope.go:117] "RemoveContainer" containerID="216a2eb4bb8838d4e80e268febc637ca0fdc1d038f57b491928448bd39de2687" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.012728 4854 scope.go:117] "RemoveContainer" containerID="9bafbbf545ecf1586c3bf1dd33b7e0e2d6763b89f8ea6a903aa9d245d45c1fb7" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.037273 4854 scope.go:117] "RemoveContainer" containerID="91a30b02a02410d5306acdb48fd96666de1bed90ab02f525930c913ecb5b8fbb" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.063362 4854 scope.go:117] "RemoveContainer" containerID="76b249acee26cd6a24c4ddf64d5a9075ac417327a62100d25d633bbd38abe199" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.081077 4854 scope.go:117] "RemoveContainer" containerID="3d24ce275ffa80601db6375eb57fbcfe4a4d5c8b0beb3defb4fb798012eb6526" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.102127 4854 scope.go:117] "RemoveContainer" containerID="1233ca8735e35f0f568d29e6123b6567da8d1baeccdaf9497d0bcbb1d794da0f" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.121297 4854 scope.go:117] "RemoveContainer" containerID="ed4b4f37e6ba7ba03783c9d9604cc2dbb871c177cca5ae2ebbda2328416cdd46" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.142601 4854 scope.go:117] "RemoveContainer" containerID="7a2b7fcd26e43a60746db22efe9b0b3cc0cec70a9bfb52d27644ca850ca16e51" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.176625 4854 scope.go:117] "RemoveContainer" containerID="5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674" Jan 03 05:43:18 crc kubenswrapper[4854]: E0103 05:43:18.177074 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\": container with ID starting with 5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674 not found: ID does not exist" containerID="5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.177159 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674"} err="failed to get container status \"5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\": rpc error: code = NotFound desc = could not find container \"5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674\": container with ID starting with 5f5526160a5c68dcf472ca4562ba7cfc24aef8be3058acd28a2850f7d7abb674 not found: ID does not exist" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.177223 4854 scope.go:117] "RemoveContainer" containerID="150adb49200724a1aa45c990bba31412c42c85ffc9dfd355f85b38114962c9eb" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.196604 4854 scope.go:117] "RemoveContainer" containerID="34ffe91003d44a5658b1de915f0823abd5399b936ddc5e4696a08171e202fa92" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.216950 4854 scope.go:117] "RemoveContainer" containerID="a547bb00f1c271e432cec6966b47decd29e1aa9e0c4f0ff7a517faed2f732b53" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.252464 4854 scope.go:117] "RemoveContainer" 
containerID="b205d8e9458800979a8d964bee4251860e547baf9ae4a82816c7347b37484e57" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.277521 4854 scope.go:117] "RemoveContainer" containerID="16935ee336fab386a68b1d6138b6131561872b3157340cd17d9f3fe44127c365" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.304270 4854 scope.go:117] "RemoveContainer" containerID="40700b72f405147dac2d7ef6884431498ebb959ac301a575ef8860d36f4cc4f4" Jan 03 05:43:18 crc kubenswrapper[4854]: I0103 05:43:18.673652 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"565ec8255dfee400afdd43514d5fae89c1bd3560fdb6fa75987de97b5a857654"} Jan 03 05:43:19 crc kubenswrapper[4854]: I0103 05:43:19.403860 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:43:19 crc kubenswrapper[4854]: I0103 05:43:19.692686 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2b22" event={"ID":"332dcfb7-8bcf-46bf-9168-4bdb4411e55e","Type":"ContainerStarted","Data":"879c4d1e22238a11a74d0e95a96ce85f406cd3da2e7217ea2f4dec58d97aea69"} Jan 03 05:43:19 crc kubenswrapper[4854]: E0103 05:43:19.883915 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="7s" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.701832 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.702015 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1e783c688f3958d8106cc55b7b342eb3b92c06ef49c155bc3474c7118bccdd71"} Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.703234 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.703864 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.704509 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.704928 4854 status_manager.go:851] "Failed to get status for pod" 
podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.705451 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.705863 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqxfg" event={"ID":"6127414f-e3e3-4c52-81a8-f6fea70b7d0c","Type":"ContainerStarted","Data":"7d0908a790711ab88ce877b0913dd162e60e002f0e3309ef83e63fbfc04c76d8"} Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.706059 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.706312 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.706541 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.707045 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.707602 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.708109 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: 
connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.708154 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"98de16ac545bc42fd7e40a9a51c5b4ea215977b46afc3d7a69dcbe3032d6151a"} Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.708667 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: E0103 05:43:20.708929 4854 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.102:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.709021 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.709395 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.709815 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.710250 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.710644 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.711065 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc 
kubenswrapper[4854]: I0103 05:43:20.711370 4854 generic.go:334] "Generic (PLEG): container finished" podID="54056ea8-c177-4995-8261-209eb3200f5f" containerID="cf4de877932af47c42d0ca2ef55c63b7b82a9fc53197cb9d734f7bef5741437e" exitCode=0 Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.711442 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mfbxz" event={"ID":"54056ea8-c177-4995-8261-209eb3200f5f","Type":"ContainerDied","Data":"cf4de877932af47c42d0ca2ef55c63b7b82a9fc53197cb9d734f7bef5741437e"} Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.711540 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.711827 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.712214 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.712710 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.713221 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.713653 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.714044 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.714436 4854 status_manager.go:851] 
"Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.714945 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.715867 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.716405 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.716871 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.717888 4854 status_manager.go:851] "Failed to get status for pod" podUID="54056ea8-c177-4995-8261-209eb3200f5f" pod="openshift-marketplace/community-operators-mfbxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mfbxz\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.718668 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.719460 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.719989 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.720583 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.721150 4854 status_manager.go:851] "Failed to get status for pod" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" pod="openshift-marketplace/redhat-operators-f2b22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2b22\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: I0103 05:43:20.721829 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:20 crc kubenswrapper[4854]: E0103 05:43:20.749443 4854 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.102:6443: connect: connection refused" event=< Jan 03 05:43:20 crc kubenswrapper[4854]: &Event{ObjectMeta:{route-controller-manager-85f4ff9897-qqv5z.18872230cb7c57a1 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-85f4ff9897-qqv5z,UID:95421875-a016-4eba-8017-27f0276a6bc4,APIVersion:v1,ResourceVersion:29227,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.56:8443/healthz": dial tcp 10.217.0.56:8443: connect: connection refused Jan 03 05:43:20 crc kubenswrapper[4854]: body: Jan 03 05:43:20 crc kubenswrapper[4854]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 05:42:51.733907361 +0000 UTC m=+150.060483963,LastTimestamp:2026-01-03 05:42:51.733907361 +0000 UTC m=+150.060483963,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 03 05:43:20 crc kubenswrapper[4854]: > Jan 03 05:43:21 crc kubenswrapper[4854]: I0103 05:43:21.341962 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:43:21 crc kubenswrapper[4854]: I0103 05:43:21.342292 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 03 05:43:21 crc kubenswrapper[4854]: I0103 05:43:21.342370 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" 
output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 03 05:43:21 crc kubenswrapper[4854]: E0103 05:43:21.717481 4854 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.102:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.120856 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.121386 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.121687 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.121974 4854 status_manager.go:851] "Failed to get status for pod" podUID="54056ea8-c177-4995-8261-209eb3200f5f" pod="openshift-marketplace/community-operators-mfbxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mfbxz\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.122292 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.122529 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.122887 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.123166 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.123440 4854 status_manager.go:851] "Failed to get status for pod" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" pod="openshift-marketplace/redhat-operators-f2b22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2b22\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.123743 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.123998 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.725680 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mfbxz" event={"ID":"54056ea8-c177-4995-8261-209eb3200f5f","Type":"ContainerStarted","Data":"99bfefcdc6181293de37907430c0cb1c85c057888c860b287f9c5ca01c37fd9c"} Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.727374 4854 status_manager.go:851] "Failed to get status for pod" podUID="54056ea8-c177-4995-8261-209eb3200f5f" pod="openshift-marketplace/community-operators-mfbxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mfbxz\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.728179 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.728935 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.729907 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.730312 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" 
pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.730829 4854 status_manager.go:851] "Failed to get status for pod" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" pod="openshift-marketplace/redhat-operators-f2b22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2b22\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.731531 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.732719 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.733154 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.733377 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:22 crc kubenswrapper[4854]: I0103 05:43:22.733716 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.117512 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.119035 4854 status_manager.go:851] "Failed to get status for pod" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" pod="openshift-marketplace/redhat-operators-f2b22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2b22\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.119588 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.120142 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.120536 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.120883 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.121295 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.121698 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.122004 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.122710 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" 
pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.123257 4854 status_manager.go:851] "Failed to get status for pod" podUID="54056ea8-c177-4995-8261-209eb3200f5f" pod="openshift-marketplace/community-operators-mfbxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mfbxz\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.123700 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.141321 4854 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.141353 4854 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:25 crc kubenswrapper[4854]: E0103 05:43:25.141759 4854 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.142256 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.748466 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"85499c41f5e52282ce48de95be29a46ba592a2dd8e4f54dbada3e9b618069a2a"} Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.929010 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.929467 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:43:25 crc kubenswrapper[4854]: I0103 05:43:25.999425 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.000307 4854 status_manager.go:851] "Failed to get status for pod" podUID="54056ea8-c177-4995-8261-209eb3200f5f" pod="openshift-marketplace/community-operators-mfbxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mfbxz\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.001037 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.001753 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.002277 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.002814 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.003493 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.003986 4854 status_manager.go:851] "Failed 
to get status for pod" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" pod="openshift-marketplace/redhat-operators-f2b22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2b22\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.004453 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.004936 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.005435 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.006008 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.311263 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.311327 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.380113 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.381057 4854 status_manager.go:851] "Failed to get status for pod" podUID="54056ea8-c177-4995-8261-209eb3200f5f" pod="openshift-marketplace/community-operators-mfbxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mfbxz\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.381746 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.382437 4854 status_manager.go:851] "Failed to get status for pod" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" pod="openshift-marketplace/redhat-operators-f2b22" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2b22\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.382971 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.383517 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.383977 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.384613 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.385122 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.385593 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.385991 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.386523 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.757383 4854 
generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8e68a098cb6f11cde51cac98a87a9c96cf1714c7a6fe91a90a5c32b5fecda3c3" exitCode=0 Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.757440 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8e68a098cb6f11cde51cac98a87a9c96cf1714c7a6fe91a90a5c32b5fecda3c3"} Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.757807 4854 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.757842 4854 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:26 crc kubenswrapper[4854]: E0103 05:43:26.758315 4854 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.758562 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.759023 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.759819 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.760258 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.760662 4854 status_manager.go:851] "Failed to get status for pod" podUID="54056ea8-c177-4995-8261-209eb3200f5f" pod="openshift-marketplace/community-operators-mfbxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mfbxz\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.761002 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.761497 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.761972 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.762459 4854 status_manager.go:851] "Failed to get status for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.762926 4854 status_manager.go:851] "Failed to get status for pod" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" pod="openshift-marketplace/redhat-operators-f2b22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2b22\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.763355 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.791808 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.835161 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.835964 4854 status_manager.go:851] "Failed to get status for pod" podUID="54056ea8-c177-4995-8261-209eb3200f5f" pod="openshift-marketplace/community-operators-mfbxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mfbxz\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.836274 4854 status_manager.go:851] "Failed to get status for pod" podUID="28406837-5e09-49b4-8583-54a450f07ae4" pod="openshift-marketplace/redhat-operators-ztvzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ztvzs\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.836702 4854 status_manager.go:851] "Failed to get status 
for pod" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" pod="openshift-marketplace/redhat-marketplace-cj878" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cj878\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.837296 4854 status_manager.go:851] "Failed to get status for pod" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" pod="openshift-marketplace/redhat-operators-f2b22" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f2b22\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.837873 4854 status_manager.go:851] "Failed to get status for pod" podUID="95421875-a016-4eba-8017-27f0276a6bc4" pod="openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85f4ff9897-qqv5z\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.838421 4854 status_manager.go:851] "Failed to get status for pod" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" pod="openshift-controller-manager/controller-manager-b46bdbb7f-szr25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b46bdbb7f-szr25\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.838935 4854 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.839520 4854 status_manager.go:851] "Failed to get status for pod" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.839913 4854 status_manager.go:851] "Failed to get status for pod" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-k8nxq\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.840399 4854 status_manager.go:851] "Failed to get status for pod" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" pod="openshift-marketplace/community-operators-bqxfg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bqxfg\": dial tcp 38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: I0103 05:43:26.840876 4854 status_manager.go:851] "Failed to get status for pod" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" pod="openshift-authentication/oauth-openshift-558db77b4-h7drl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h7drl\": dial tcp 
38.102.83.102:6443: connect: connection refused" Jan 03 05:43:26 crc kubenswrapper[4854]: E0103 05:43:26.885512 4854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.102:6443: connect: connection refused" interval="7s" Jan 03 05:43:27 crc kubenswrapper[4854]: I0103 05:43:27.767666 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c3e25f46ad7b822048aac414b1c7a5d98067e55ee46b28da966e5786de7b38fd"} Jan 03 05:43:27 crc kubenswrapper[4854]: I0103 05:43:27.767993 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5531735273f0474ee090c7daa2a42b5dd716f6edd142aedd557f6e6b6d9a98bc"} Jan 03 05:43:27 crc kubenswrapper[4854]: I0103 05:43:27.768008 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aa36eee1d03c06c22191396382879243c7d495ca87c9347bde9bc89a9a13a740"} Jan 03 05:43:27 crc kubenswrapper[4854]: I0103 05:43:27.768019 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31f486a3fd876cd5a54d3a99373c331adebb1941761a4eea979a8f05ac4e77de"} Jan 03 05:43:28 crc kubenswrapper[4854]: I0103 05:43:28.775530 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83147d895b3f74ae748bef2220394a1cc3925647e9b70e10b3fd512401f69400"} Jan 03 05:43:28 crc kubenswrapper[4854]: I0103 05:43:28.775806 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:28 crc kubenswrapper[4854]: I0103 05:43:28.775951 4854 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:28 crc kubenswrapper[4854]: I0103 05:43:28.775981 4854 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:28 crc kubenswrapper[4854]: I0103 05:43:28.918966 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f2b22" Jan 03 05:43:28 crc kubenswrapper[4854]: I0103 05:43:28.919029 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f2b22" Jan 03 05:43:28 crc kubenswrapper[4854]: I0103 05:43:28.981732 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f2b22" Jan 03 05:43:29 crc kubenswrapper[4854]: I0103 05:43:29.861177 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f2b22" Jan 03 05:43:30 crc kubenswrapper[4854]: I0103 05:43:30.143398 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:30 crc kubenswrapper[4854]: I0103 05:43:30.143820 4854 
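
[Editor's note] The SyncLoop (PLEG) ContainerStarted events above trace the static kube-apiserver-crc pod coming back up; each Data field carries a CRI ID for the pod's sandbox or one of its containers. The event={...} payload is valid JSON once the event= prefix is stripped, so the IDs can be pulled out mechanically. A sketch under the same assumptions as above (kubelet.log is a hypothetical filename):

    # List PLEG ContainerStarted IDs for one pod, in order of appearance.
    import json
    import re

    TARGET = "openshift-kube-apiserver/kube-apiserver-crc"
    EVENT_RE = re.compile(r'pod="([^"]+)" event=(\{.*?\})')

    with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical path
        for line in fh:
            m = EVENT_RE.search(line)
            if not m or m.group(1) != TARGET:
                continue
            event = json.loads(m.group(2))
            if event["Type"] == "ContainerStarted":
                print(event["Data"])
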
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:30 crc kubenswrapper[4854]: I0103 05:43:30.151510 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:31 crc kubenswrapper[4854]: I0103 05:43:31.342223 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 03 05:43:31 crc kubenswrapper[4854]: I0103 05:43:31.342634 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 03 05:43:33 crc kubenswrapper[4854]: I0103 05:43:33.786802 4854 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:33 crc kubenswrapper[4854]: I0103 05:43:33.821462 4854 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:33 crc kubenswrapper[4854]: I0103 05:43:33.821500 4854 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:33 crc kubenswrapper[4854]: I0103 05:43:33.825564 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:43:34 crc kubenswrapper[4854]: I0103 05:43:34.081969 4854 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8402edbd-acc2-4145-9085-c04452ec5bdf" Jan 03 05:43:34 crc kubenswrapper[4854]: I0103 05:43:34.831917 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 03 05:43:34 crc kubenswrapper[4854]: I0103 05:43:34.833792 4854 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="17f2eacd2ec85cb4998fb9f2b86f2f619e9708d40c28052bf7dd31ddd413eea7" exitCode=1 Jan 03 05:43:34 crc kubenswrapper[4854]: I0103 05:43:34.833872 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"17f2eacd2ec85cb4998fb9f2b86f2f619e9708d40c28052bf7dd31ddd413eea7"} Jan 03 05:43:34 crc kubenswrapper[4854]: I0103 05:43:34.834357 4854 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:34 crc kubenswrapper[4854]: I0103 05:43:34.834384 4854 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:43:34 crc kubenswrapper[4854]: I0103 05:43:34.834917 4854 scope.go:117] "RemoveContainer" 
containerID="17f2eacd2ec85cb4998fb9f2b86f2f619e9708d40c28052bf7dd31ddd413eea7" Jan 03 05:43:34 crc kubenswrapper[4854]: I0103 05:43:34.839746 4854 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8402edbd-acc2-4145-9085-c04452ec5bdf" Jan 03 05:43:35 crc kubenswrapper[4854]: I0103 05:43:35.845747 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 03 05:43:35 crc kubenswrapper[4854]: I0103 05:43:35.847660 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9947548d4407c12ea7558fc69a8e82addf463e50dfce0035359515d4c4b695bc"} Jan 03 05:43:36 crc kubenswrapper[4854]: I0103 05:43:36.376810 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:43:41 crc kubenswrapper[4854]: I0103 05:43:41.342622 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 03 05:43:41 crc kubenswrapper[4854]: I0103 05:43:41.342702 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 03 05:43:41 crc kubenswrapper[4854]: I0103 05:43:41.342775 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:43:41 crc kubenswrapper[4854]: I0103 05:43:41.343690 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"1e783c688f3958d8106cc55b7b342eb3b92c06ef49c155bc3474c7118bccdd71"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 03 05:43:41 crc kubenswrapper[4854]: I0103 05:43:41.344192 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://1e783c688f3958d8106cc55b7b342eb3b92c06ef49c155bc3474c7118bccdd71" gracePeriod=30 Jan 03 05:43:41 crc kubenswrapper[4854]: I0103 05:43:41.755747 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:43:41 crc kubenswrapper[4854]: I0103 05:43:41.755834 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:43:57 crc kubenswrapper[4854]: I0103 05:43:57.766140 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 03 05:44:00 crc kubenswrapper[4854]: I0103 05:44:00.076589 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 03 05:44:00 crc kubenswrapper[4854]: I0103 05:44:00.153994 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 03 05:44:00 crc kubenswrapper[4854]: I0103 05:44:00.162484 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 03 05:44:00 crc kubenswrapper[4854]: I0103 05:44:00.906201 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 03 05:44:01 crc kubenswrapper[4854]: I0103 05:44:01.259128 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 03 05:44:02 crc kubenswrapper[4854]: I0103 05:44:02.416500 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 03 05:44:02 crc kubenswrapper[4854]: I0103 05:44:02.673254 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 03 05:44:02 crc kubenswrapper[4854]: I0103 05:44:02.776185 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 03 05:44:03 crc kubenswrapper[4854]: I0103 05:44:03.041159 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 03 05:44:03 crc kubenswrapper[4854]: I0103 05:44:03.153502 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 03 05:44:03 crc kubenswrapper[4854]: I0103 05:44:03.585418 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 03 05:44:03 crc kubenswrapper[4854]: I0103 05:44:03.605420 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 03 05:44:03 crc kubenswrapper[4854]: I0103 05:44:03.694958 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 03 05:44:03 crc kubenswrapper[4854]: I0103 05:44:03.740588 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 03 05:44:04 crc kubenswrapper[4854]: I0103 05:44:04.081104 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 03 05:44:04 crc kubenswrapper[4854]: I0103 05:44:04.175498 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 03 05:44:04 crc kubenswrapper[4854]: I0103 05:44:04.429823 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 03 05:44:04 crc kubenswrapper[4854]: I0103 05:44:04.518700 4854 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 03 05:44:04 crc kubenswrapper[4854]: I0103 05:44:04.609768 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 03 05:44:04 crc kubenswrapper[4854]: I0103 05:44:04.665999 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 03 05:44:05 crc kubenswrapper[4854]: I0103 05:44:05.155385 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 03 05:44:05 crc kubenswrapper[4854]: I0103 05:44:05.249976 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 03 05:44:05 crc kubenswrapper[4854]: I0103 05:44:05.333564 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 03 05:44:05 crc kubenswrapper[4854]: I0103 05:44:05.378861 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 03 05:44:05 crc kubenswrapper[4854]: I0103 05:44:05.517233 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 03 05:44:05 crc kubenswrapper[4854]: I0103 05:44:05.858727 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 03 05:44:05 crc kubenswrapper[4854]: I0103 05:44:05.975566 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 03 05:44:06 crc kubenswrapper[4854]: I0103 05:44:06.163525 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 03 05:44:06 crc kubenswrapper[4854]: I0103 05:44:06.179393 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 03 05:44:07 crc kubenswrapper[4854]: I0103 05:44:07.155953 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 03 05:44:07 crc kubenswrapper[4854]: I0103 05:44:07.269336 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 03 05:44:07 crc kubenswrapper[4854]: I0103 05:44:07.362767 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 03 05:44:07 crc kubenswrapper[4854]: I0103 05:44:07.504248 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 03 05:44:07 crc kubenswrapper[4854]: I0103 05:44:07.577117 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 03 05:44:07 crc kubenswrapper[4854]: I0103 05:44:07.599070 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 03 05:44:08 crc kubenswrapper[4854]: I0103 05:44:08.088507 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 03 05:44:08 crc kubenswrapper[4854]: I0103 05:44:08.278210 4854 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 03 05:44:08 crc kubenswrapper[4854]: I0103 05:44:08.498295 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 03 05:44:08 crc kubenswrapper[4854]: I0103 05:44:08.631758 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 03 05:44:09 crc kubenswrapper[4854]: I0103 05:44:09.100242 4854 generic.go:334] "Generic (PLEG): container finished" podID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerID="10cde2faee74631a8c6185f6e956d7af2bdb78e3cb320f987bb30ac1860b9571" exitCode=0 Jan 03 05:44:09 crc kubenswrapper[4854]: I0103 05:44:09.100308 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" event={"ID":"5b7f5d78-25a5-497a-9315-494fe26edb93","Type":"ContainerDied","Data":"10cde2faee74631a8c6185f6e956d7af2bdb78e3cb320f987bb30ac1860b9571"} Jan 03 05:44:09 crc kubenswrapper[4854]: I0103 05:44:09.101328 4854 scope.go:117] "RemoveContainer" containerID="10cde2faee74631a8c6185f6e956d7af2bdb78e3cb320f987bb30ac1860b9571" Jan 03 05:44:09 crc kubenswrapper[4854]: I0103 05:44:09.136527 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 03 05:44:09 crc kubenswrapper[4854]: I0103 05:44:09.445209 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 03 05:44:09 crc kubenswrapper[4854]: I0103 05:44:09.524259 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 03 05:44:09 crc kubenswrapper[4854]: I0103 05:44:09.583315 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 03 05:44:09 crc kubenswrapper[4854]: I0103 05:44:09.810112 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.106926 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" event={"ID":"5b7f5d78-25a5-497a-9315-494fe26edb93","Type":"ContainerStarted","Data":"116076dc26f956b7f5b8277e722211dfb77ce5ffeb7b78f8e1ea0358f7dd9fcb"} Jan 03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.107760 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.115648 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.509582 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.650834 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.662923 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 
03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.721899 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.863015 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 03 05:44:10 crc kubenswrapper[4854]: I0103 05:44:10.934673 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.043529 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.054870 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.115978 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.225916 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.275962 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.310917 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.319884 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.483727 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.755505 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.755563 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.755605 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.756249 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41ee4426739e125fc38ef9de0bc907f228c08816a774c8b5f992bf1e1c0c09cc"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:44:11 crc kubenswrapper[4854]: I0103 05:44:11.756305 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" 
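
[Editor's note] After the repeated liveness failures above the kubelet decides to restart machine-config-daemon; the entry that follows shows the kill with gracePeriod=600 (SIGTERM first, SIGKILL only after the grace period), whereas kube-controller-manager earlier was killed with gracePeriod=30. The exitCode=137 (128+9, i.e. SIGKILL) recorded for kube-controller-manager just below is consistent with that container not stopping within its 30s grace period. A sketch that extracts every such kill decision from the excerpt (kubelet.log again hypothetical):

    # Pull container name, ID, and grace period out of every
    # "Killing container with a grace period" entry.
    import re

    KILL_RE = re.compile(
        r'"Killing container with a grace period".*?'
        r'containerName="(?P<name>[^"]+)".*?'
        r'containerID="(?P<cid>[^"]+)".*?gracePeriod=(?P<grace>\d+)'
    )

    with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical path
        for line in fh:
            m = KILL_RE.search(line)
            if m:
                print(m.group("name"), m.group("cid"), m.group("grace") + "s")
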
podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://41ee4426739e125fc38ef9de0bc907f228c08816a774c8b5f992bf1e1c0c09cc" gracePeriod=600 Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.053808 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.119234 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.120769 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.120837 4854 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1e783c688f3958d8106cc55b7b342eb3b92c06ef49c155bc3474c7118bccdd71" exitCode=137 Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.125553 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="41ee4426739e125fc38ef9de0bc907f228c08816a774c8b5f992bf1e1c0c09cc" exitCode=0 Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.127639 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1e783c688f3958d8106cc55b7b342eb3b92c06ef49c155bc3474c7118bccdd71"} Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.127695 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5a0763e01c1342b89c8825637cbcf287d92d7340beb51666b69cf6ebf12fd3b9"} Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.127714 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"41ee4426739e125fc38ef9de0bc907f228c08816a774c8b5f992bf1e1c0c09cc"} Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.127728 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"d8ba999ad3c3dcd9750b64af99186e9f84152e1189793a50472b4e974fec8292"} Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.127753 4854 scope.go:117] "RemoveContainer" containerID="d23ed4cba923a9b14622426134be2d60f5a835b8bbabc385821b9cfbeead4b13" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.278649 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.388023 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.650512 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.808370 4854 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.819403 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.916452 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.959581 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 03 05:44:12 crc kubenswrapper[4854]: I0103 05:44:12.990565 4854 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 03 05:44:13 crc kubenswrapper[4854]: I0103 05:44:13.136681 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 03 05:44:13 crc kubenswrapper[4854]: I0103 05:44:13.206421 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 03 05:44:13 crc kubenswrapper[4854]: I0103 05:44:13.281826 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 03 05:44:13 crc kubenswrapper[4854]: I0103 05:44:13.302643 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 03 05:44:13 crc kubenswrapper[4854]: I0103 05:44:13.304511 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 03 05:44:13 crc kubenswrapper[4854]: I0103 05:44:13.377732 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 03 05:44:13 crc kubenswrapper[4854]: I0103 05:44:13.770539 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 03 05:44:13 crc kubenswrapper[4854]: I0103 05:44:13.875457 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.098176 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.099559 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.104725 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.476646 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.578120 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.647423 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 
05:44:14.705138 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.740522 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.838947 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 03 05:44:14 crc kubenswrapper[4854]: I0103 05:44:14.923531 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 03 05:44:15 crc kubenswrapper[4854]: I0103 05:44:15.239748 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 03 05:44:15 crc kubenswrapper[4854]: I0103 05:44:15.302998 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 03 05:44:15 crc kubenswrapper[4854]: I0103 05:44:15.337809 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 03 05:44:15 crc kubenswrapper[4854]: I0103 05:44:15.768567 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 03 05:44:16 crc kubenswrapper[4854]: I0103 05:44:16.026767 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 03 05:44:16 crc kubenswrapper[4854]: I0103 05:44:16.056659 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 03 05:44:16 crc kubenswrapper[4854]: I0103 05:44:16.293506 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 03 05:44:16 crc kubenswrapper[4854]: I0103 05:44:16.401206 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 03 05:44:16 crc kubenswrapper[4854]: I0103 05:44:16.413762 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 03 05:44:16 crc kubenswrapper[4854]: I0103 05:44:16.448735 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 03 05:44:16 crc kubenswrapper[4854]: I0103 05:44:16.791687 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:44:17 crc kubenswrapper[4854]: I0103 05:44:17.246848 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 03 05:44:17 crc kubenswrapper[4854]: I0103 05:44:17.350486 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 03 05:44:17 crc kubenswrapper[4854]: I0103 05:44:17.474952 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 03 05:44:17 crc kubenswrapper[4854]: I0103 05:44:17.838610 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 03 05:44:18 crc kubenswrapper[4854]: I0103 05:44:18.266120 
4854 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 03 05:44:18 crc kubenswrapper[4854]: I0103 05:44:18.375207 4854 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 03 05:44:18 crc kubenswrapper[4854]: I0103 05:44:18.399582 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 03 05:44:18 crc kubenswrapper[4854]: I0103 05:44:18.961696 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 03 05:44:19 crc kubenswrapper[4854]: I0103 05:44:19.105587 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 03 05:44:19 crc kubenswrapper[4854]: I0103 05:44:19.214113 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 03 05:44:19 crc kubenswrapper[4854]: I0103 05:44:19.280444 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 03 05:44:19 crc kubenswrapper[4854]: I0103 05:44:19.587370 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 03 05:44:19 crc kubenswrapper[4854]: I0103 05:44:19.650770 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 03 05:44:19 crc kubenswrapper[4854]: I0103 05:44:19.888852 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 03 05:44:19 crc kubenswrapper[4854]: I0103 05:44:19.898720 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 03 05:44:20 crc kubenswrapper[4854]: I0103 05:44:20.316204 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 03 05:44:20 crc kubenswrapper[4854]: I0103 05:44:20.375379 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 03 05:44:20 crc kubenswrapper[4854]: I0103 05:44:20.629359 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 03 05:44:20 crc kubenswrapper[4854]: I0103 05:44:20.638975 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 03 05:44:21 crc kubenswrapper[4854]: I0103 05:44:21.137528 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 03 05:44:21 crc kubenswrapper[4854]: I0103 05:44:21.184072 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 03 05:44:21 crc kubenswrapper[4854]: I0103 05:44:21.241069 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 03 05:44:21 crc kubenswrapper[4854]: I0103 05:44:21.342280 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 
03 05:44:21 crc kubenswrapper[4854]: I0103 05:44:21.347038 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:44:21 crc kubenswrapper[4854]: I0103 05:44:21.594484 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 03 05:44:21 crc kubenswrapper[4854]: I0103 05:44:21.916281 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 03 05:44:22 crc kubenswrapper[4854]: I0103 05:44:22.189198 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 03 05:44:22 crc kubenswrapper[4854]: I0103 05:44:22.208806 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 05:44:22 crc kubenswrapper[4854]: I0103 05:44:22.437712 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 03 05:44:22 crc kubenswrapper[4854]: I0103 05:44:22.478338 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 03 05:44:22 crc kubenswrapper[4854]: I0103 05:44:22.985029 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 03 05:44:23 crc kubenswrapper[4854]: I0103 05:44:23.356275 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 03 05:44:23 crc kubenswrapper[4854]: I0103 05:44:23.574864 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.072896 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.109716 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.195857 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.324112 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.422787 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.547250 4854 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.548889 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mfbxz" podStartSLOduration=65.465801616 podStartE2EDuration="3m19.548875636s" podCreationTimestamp="2026-01-03 05:41:05 +0000 UTC" firstStartedPulling="2026-01-03 05:41:07.714333835 +0000 UTC m=+46.040910407" lastFinishedPulling="2026-01-03 05:43:21.797407845 +0000 UTC m=+180.123984427" observedRunningTime="2026-01-03 05:43:34.017375778 +0000 UTC m=+192.343952370" 
watchObservedRunningTime="2026-01-03 05:44:24.548875636 +0000 UTC m=+242.875452208" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.549155 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bqxfg" podStartSLOduration=72.950538705 podStartE2EDuration="3m19.549151853s" podCreationTimestamp="2026-01-03 05:41:05 +0000 UTC" firstStartedPulling="2026-01-03 05:41:07.673516448 +0000 UTC m=+46.000093020" lastFinishedPulling="2026-01-03 05:43:14.272129556 +0000 UTC m=+172.598706168" observedRunningTime="2026-01-03 05:43:33.960456222 +0000 UTC m=+192.287032804" watchObservedRunningTime="2026-01-03 05:44:24.549151853 +0000 UTC m=+242.875728425" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.549535 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f2b22" podStartSLOduration=68.920818948 podStartE2EDuration="3m16.549532502s" podCreationTimestamp="2026-01-03 05:41:08 +0000 UTC" firstStartedPulling="2026-01-03 05:41:09.746585479 +0000 UTC m=+48.073162051" lastFinishedPulling="2026-01-03 05:43:17.375299043 +0000 UTC m=+175.701875605" observedRunningTime="2026-01-03 05:43:33.838955641 +0000 UTC m=+192.165532253" watchObservedRunningTime="2026-01-03 05:44:24.549532502 +0000 UTC m=+242.876109074" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551289 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ztvzs","openshift-controller-manager/controller-manager-b46bdbb7f-szr25","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-h7drl","openshift-route-controller-manager/route-controller-manager-85f4ff9897-qqv5z","openshift-marketplace/redhat-marketplace-cj878"] Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551360 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8","openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx","openshift-authentication/oauth-openshift-6994f97844-8cxlw"] Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551565 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="registry-server" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551583 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="registry-server" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551600 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="extract-utilities" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551607 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="extract-utilities" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551615 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="extract-content" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551622 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="extract-content" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551633 4854 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" containerName="controller-manager" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551642 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" containerName="controller-manager" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551654 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="registry-server" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551663 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="registry-server" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551673 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="extract-content" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551682 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="extract-content" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551693 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="extract-utilities" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551700 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="extract-utilities" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551716 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" containerName="installer" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551724 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" containerName="installer" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551734 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95421875-a016-4eba-8017-27f0276a6bc4" containerName="route-controller-manager" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551742 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="95421875-a016-4eba-8017-27f0276a6bc4" containerName="route-controller-manager" Jan 03 05:44:24 crc kubenswrapper[4854]: E0103 05:44:24.551751 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" containerName="oauth-openshift" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551759 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" containerName="oauth-openshift" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551774 4854 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551791 4854 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="24758257-4839-46a6-836c-76b2208dda54" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551884 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" containerName="registry-server" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551903 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="95421875-a016-4eba-8017-27f0276a6bc4" containerName="route-controller-manager" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 
05:44:24.551915 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" containerName="controller-manager" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551926 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="28406837-5e09-49b4-8583-54a450f07ae4" containerName="registry-server" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551939 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="0380b43d-2d7f-490d-a822-1740bfa5c9ac" containerName="installer" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.551950 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" containerName="oauth-openshift" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.552797 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.552878 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.553910 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wc7xf"] Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.555180 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.557418 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.557910 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.561714 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.567525 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.568582 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.568884 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.569159 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.569778 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.569839 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.569862 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570148 4854 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570198 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570348 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570368 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570426 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570557 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570585 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570639 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570665 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570920 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.570937 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.571263 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.572025 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.575634 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.575983 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.585615 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.588211 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.590021 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.595715 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 
05:44:24.611654 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-client-ca\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.611700 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-config\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.611725 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-config\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.611837 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.611895 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.611935 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww8tk\" (UniqueName: \"kubernetes.io/projected/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-kube-api-access-ww8tk\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.611986 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612009 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-client-ca\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " 
pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612039 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-audit-policies\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612067 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612144 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4664fb-8570-4f61-b99f-37a3e9031738-serving-cert\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612179 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-proxy-ca-bundles\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612305 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612329 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/159783f1-b3b7-432d-b243-e8e7076ddd0a-audit-dir\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612349 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612378 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvj6m\" (UniqueName: \"kubernetes.io/projected/8f4664fb-8570-4f61-b99f-37a3e9031738-kube-api-access-rvj6m\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: 
\"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612400 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612423 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-serving-cert\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612460 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-session\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612482 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612502 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-login\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612520 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbs7\" (UniqueName: \"kubernetes.io/projected/159783f1-b3b7-432d-b243-e8e7076ddd0a-kube-api-access-hwbs7\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.612543 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-error\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.640144 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podStartSLOduration=51.640123696 podStartE2EDuration="51.640123696s" podCreationTimestamp="2026-01-03 05:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:44:24.635031447 +0000 UTC m=+242.961608059" watchObservedRunningTime="2026-01-03 05:44:24.640123696 +0000 UTC m=+242.966700298" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713305 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-client-ca\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713390 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-config\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713435 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-config\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713473 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713521 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713559 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww8tk\" (UniqueName: \"kubernetes.io/projected/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-kube-api-access-ww8tk\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713608 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713639 4854 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-client-ca\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713686 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-audit-policies\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713755 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713785 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4664fb-8570-4f61-b99f-37a3e9031738-serving-cert\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713818 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-proxy-ca-bundles\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713871 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713905 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/159783f1-b3b7-432d-b243-e8e7076ddd0a-audit-dir\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713937 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.713982 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.714016 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvj6m\" (UniqueName: \"kubernetes.io/projected/8f4664fb-8570-4f61-b99f-37a3e9031738-kube-api-access-rvj6m\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.714053 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-serving-cert\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.714114 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-session\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.714151 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.714189 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-login\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.714223 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwbs7\" (UniqueName: \"kubernetes.io/projected/159783f1-b3b7-432d-b243-e8e7076ddd0a-kube-api-access-hwbs7\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.714256 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-error\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.715359 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-client-ca\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.717018 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-config\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.717145 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-audit-policies\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.717533 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-config\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.717868 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.717913 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-proxy-ca-bundles\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.718190 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.718360 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/159783f1-b3b7-432d-b243-e8e7076ddd0a-audit-dir\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.719274 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " 
pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.719584 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-client-ca\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.722384 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-error\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.722752 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-login\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.724835 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4664fb-8570-4f61-b99f-37a3e9031738-serving-cert\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.725264 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.725316 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.726076 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-session\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.726539 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 05:44:24 crc kubenswrapper[4854]: 
I0103 05:44:24.727741 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-serving-cert\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.728576 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.729797 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/159783f1-b3b7-432d-b243-e8e7076ddd0a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.739130 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbs7\" (UniqueName: \"kubernetes.io/projected/159783f1-b3b7-432d-b243-e8e7076ddd0a-kube-api-access-hwbs7\") pod \"oauth-openshift-6994f97844-8cxlw\" (UID: \"159783f1-b3b7-432d-b243-e8e7076ddd0a\") " pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.740028 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvj6m\" (UniqueName: \"kubernetes.io/projected/8f4664fb-8570-4f61-b99f-37a3e9031738-kube-api-access-rvj6m\") pod \"controller-manager-7ff6f7c9f7-sxln8\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.742539 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww8tk\" (UniqueName: \"kubernetes.io/projected/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-kube-api-access-ww8tk\") pod \"route-controller-manager-799fd78b6c-2ngzx\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.776320 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.795412 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.892820 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.911420 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"
Jan 03 05:44:24 crc kubenswrapper[4854]: I0103 05:44:24.922492 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8"
Jan 03 05:44:25 crc kubenswrapper[4854]: I0103 05:44:25.206267 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"]
Jan 03 05:44:25 crc kubenswrapper[4854]: W0103 05:44:25.214296 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda91a6bd1_6075_4979_a9af_2ed9d6a75e42.slice/crio-34da3867bd51379f89c5f2a811817d9fe84de20c6101827fa9c3b1a623003241 WatchSource:0}: Error finding container 34da3867bd51379f89c5f2a811817d9fe84de20c6101827fa9c3b1a623003241: Status 404 returned error can't find the container with id 34da3867bd51379f89c5f2a811817d9fe84de20c6101827fa9c3b1a623003241
Jan 03 05:44:25 crc kubenswrapper[4854]: I0103 05:44:25.222331 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" event={"ID":"a91a6bd1-6075-4979-a9af-2ed9d6a75e42","Type":"ContainerStarted","Data":"34da3867bd51379f89c5f2a811817d9fe84de20c6101827fa9c3b1a623003241"}
Jan 03 05:44:25 crc kubenswrapper[4854]: I0103 05:44:25.470568 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6994f97844-8cxlw"]
Jan 03 05:44:25 crc kubenswrapper[4854]: I0103 05:44:25.484214 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8"]
Jan 03 05:44:25 crc kubenswrapper[4854]: W0103 05:44:25.499280 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4664fb_8570_4f61_b99f_37a3e9031738.slice/crio-ac2c3b2b3e8ac75104447e9bf4bb9f3ad00d2f3d09db6b7b203f0f5ac926fde1 WatchSource:0}: Error finding container ac2c3b2b3e8ac75104447e9bf4bb9f3ad00d2f3d09db6b7b203f0f5ac926fde1: Status 404 returned error can't find the container with id ac2c3b2b3e8ac75104447e9bf4bb9f3ad00d2f3d09db6b7b203f0f5ac926fde1
Jan 03 05:44:25 crc kubenswrapper[4854]: I0103 05:44:25.632199 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.128786 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28406837-5e09-49b4-8583-54a450f07ae4" path="/var/lib/kubelet/pods/28406837-5e09-49b4-8583-54a450f07ae4/volumes"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.130576 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95421875-a016-4eba-8017-27f0276a6bc4" path="/var/lib/kubelet/pods/95421875-a016-4eba-8017-27f0276a6bc4/volumes"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.131756 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a6f0af-e686-4a3a-b1af-b4ab77e8362c" path="/var/lib/kubelet/pods/b1a6f0af-e686-4a3a-b1af-b4ab77e8362c/volumes"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.133629 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31b366b-2182-4c59-8777-e552553ba8a8" path="/var/lib/kubelet/pods/c31b366b-2182-4c59-8777-e552553ba8a8/volumes"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.134495 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd260432-f4bc-4c81-a5e1-e3205534cda8" path="/var/lib/kubelet/pods/dd260432-f4bc-4c81-a5e1-e3205534cda8/volumes"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.231617 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" event={"ID":"159783f1-b3b7-432d-b243-e8e7076ddd0a","Type":"ContainerStarted","Data":"63a6df97a4d6d0dd53797147a04bd9ead8a73e7a7c379690c6469deb9f718104"}
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.234304 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" event={"ID":"a91a6bd1-6075-4979-a9af-2ed9d6a75e42","Type":"ContainerStarted","Data":"8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667"}
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.234603 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.235876 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" event={"ID":"8f4664fb-8570-4f61-b99f-37a3e9031738","Type":"ContainerStarted","Data":"ac2c3b2b3e8ac75104447e9bf4bb9f3ad00d2f3d09db6b7b203f0f5ac926fde1"}
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.244516 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.264488 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" podStartSLOduration=16.26447504 podStartE2EDuration="16.26447504s" podCreationTimestamp="2026-01-03 05:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:44:26.260462639 +0000 UTC m=+244.587039211" watchObservedRunningTime="2026-01-03 05:44:26.26447504 +0000 UTC m=+244.591051612"
Jan 03 05:44:26 crc kubenswrapper[4854]: I0103 05:44:26.278124 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 03 05:44:27 crc kubenswrapper[4854]: I0103 05:44:27.245915 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" event={"ID":"159783f1-b3b7-432d-b243-e8e7076ddd0a","Type":"ContainerStarted","Data":"1c1339677d0c8a6d7d7eee61fd4fa15d6a40580599301989032bde78a8b8e7c2"}
Jan 03 05:44:27 crc kubenswrapper[4854]: I0103 05:44:27.246197 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw"
Jan 03 05:44:27 crc kubenswrapper[4854]: I0103 05:44:27.248824 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" event={"ID":"8f4664fb-8570-4f61-b99f-37a3e9031738","Type":"ContainerStarted","Data":"d312f0bb860ce2820784141d1dab5cd2a5799a42fc7edb81219216d442d05031"}
Jan 03 05:44:27 crc kubenswrapper[4854]: I0103 05:44:27.255404 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw"
Jan 03 05:44:27 crc kubenswrapper[4854]: I0103 05:44:27.313776 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podStartSLOduration=121.313758938 podStartE2EDuration="2m1.313758938s" podCreationTimestamp="2026-01-03 05:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:44:27.311902181 +0000 UTC m=+245.638478763" watchObservedRunningTime="2026-01-03 05:44:27.313758938 +0000 UTC m=+245.640335520"
Jan 03 05:44:27 crc kubenswrapper[4854]: I0103 05:44:27.371384 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" podStartSLOduration=17.371359607 podStartE2EDuration="17.371359607s" podCreationTimestamp="2026-01-03 05:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:44:27.344412585 +0000 UTC m=+245.670989157" watchObservedRunningTime="2026-01-03 05:44:27.371359607 +0000 UTC m=+245.697936199"
Jan 03 05:44:28 crc kubenswrapper[4854]: I0103 05:44:28.046664 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 03 05:44:28 crc kubenswrapper[4854]: I0103 05:44:28.254967 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8"
Jan 03 05:44:28 crc kubenswrapper[4854]: I0103 05:44:28.258572 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8"
Jan 03 05:44:28 crc kubenswrapper[4854]: I0103 05:44:28.312418 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8"]
Jan 03 05:44:28 crc kubenswrapper[4854]: I0103 05:44:28.331263 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"]
Jan 03 05:44:28 crc kubenswrapper[4854]: I0103 05:44:28.463129 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.258109 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" podUID="a91a6bd1-6075-4979-a9af-2ed9d6a75e42" containerName="route-controller-manager" containerID="cri-o://8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667" gracePeriod=30
Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.505988 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.681745 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.705397 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q"] Jan 03 05:44:29 crc kubenswrapper[4854]: E0103 05:44:29.705609 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91a6bd1-6075-4979-a9af-2ed9d6a75e42" containerName="route-controller-manager" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.705624 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91a6bd1-6075-4979-a9af-2ed9d6a75e42" containerName="route-controller-manager" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.705714 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91a6bd1-6075-4979-a9af-2ed9d6a75e42" containerName="route-controller-manager" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.706073 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.727105 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q"] Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.795572 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-config\") pod \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.795722 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww8tk\" (UniqueName: \"kubernetes.io/projected/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-kube-api-access-ww8tk\") pod \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.795760 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-serving-cert\") pod \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.795792 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-client-ca\") pod \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\" (UID: \"a91a6bd1-6075-4979-a9af-2ed9d6a75e42\") " Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.795971 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2w4\" (UniqueName: \"kubernetes.io/projected/a7ab4e78-6d39-46bb-bb00-cd098e009115-kube-api-access-5s2w4\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.796031 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-config\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: 
\"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.796093 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ab4e78-6d39-46bb-bb00-cd098e009115-serving-cert\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.796111 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-client-ca\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.796272 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-config" (OuterVolumeSpecName: "config") pod "a91a6bd1-6075-4979-a9af-2ed9d6a75e42" (UID: "a91a6bd1-6075-4979-a9af-2ed9d6a75e42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.796526 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-client-ca" (OuterVolumeSpecName: "client-ca") pod "a91a6bd1-6075-4979-a9af-2ed9d6a75e42" (UID: "a91a6bd1-6075-4979-a9af-2ed9d6a75e42"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.804062 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-kube-api-access-ww8tk" (OuterVolumeSpecName: "kube-api-access-ww8tk") pod "a91a6bd1-6075-4979-a9af-2ed9d6a75e42" (UID: "a91a6bd1-6075-4979-a9af-2ed9d6a75e42"). InnerVolumeSpecName "kube-api-access-ww8tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.810274 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a91a6bd1-6075-4979-a9af-2ed9d6a75e42" (UID: "a91a6bd1-6075-4979-a9af-2ed9d6a75e42"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.897606 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-config\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.897683 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ab4e78-6d39-46bb-bb00-cd098e009115-serving-cert\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.897704 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-client-ca\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.897728 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s2w4\" (UniqueName: \"kubernetes.io/projected/a7ab4e78-6d39-46bb-bb00-cd098e009115-kube-api-access-5s2w4\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.897782 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww8tk\" (UniqueName: \"kubernetes.io/projected/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-kube-api-access-ww8tk\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.897794 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.897803 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.897816 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91a6bd1-6075-4979-a9af-2ed9d6a75e42-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.899008 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-client-ca\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.899144 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-config\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: 
\"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.901277 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ab4e78-6d39-46bb-bb00-cd098e009115-serving-cert\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:29 crc kubenswrapper[4854]: I0103 05:44:29.922822 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s2w4\" (UniqueName: \"kubernetes.io/projected/a7ab4e78-6d39-46bb-bb00-cd098e009115-kube-api-access-5s2w4\") pod \"route-controller-manager-8596db5b-95z6q\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.029371 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.050353 4854 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.050579 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://98de16ac545bc42fd7e40a9a51c5b4ea215977b46afc3d7a69dcbe3032d6151a" gracePeriod=5 Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.263854 4854 generic.go:334] "Generic (PLEG): container finished" podID="a91a6bd1-6075-4979-a9af-2ed9d6a75e42" containerID="8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667" exitCode=0 Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.264046 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" podUID="8f4664fb-8570-4f61-b99f-37a3e9031738" containerName="controller-manager" containerID="cri-o://d312f0bb860ce2820784141d1dab5cd2a5799a42fc7edb81219216d442d05031" gracePeriod=30 Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.264348 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.264871 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" event={"ID":"a91a6bd1-6075-4979-a9af-2ed9d6a75e42","Type":"ContainerDied","Data":"8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667"} Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.264899 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx" event={"ID":"a91a6bd1-6075-4979-a9af-2ed9d6a75e42","Type":"ContainerDied","Data":"34da3867bd51379f89c5f2a811817d9fe84de20c6101827fa9c3b1a623003241"} Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.264914 4854 scope.go:117] "RemoveContainer" containerID="8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667" Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.290657 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"] Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.292067 4854 scope.go:117] "RemoveContainer" containerID="8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667" Jan 03 05:44:30 crc kubenswrapper[4854]: E0103 05:44:30.292478 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667\": container with ID starting with 8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667 not found: ID does not exist" containerID="8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667" Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.292504 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667"} err="failed to get container status \"8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667\": rpc error: code = NotFound desc = could not find container \"8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667\": container with ID starting with 8fd9e73d4dee375af45bdeaa052ee325a24d295cff7e1a8acaff89ef472b8667 not found: ID does not exist" Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.295513 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799fd78b6c-2ngzx"] Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.502326 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q"] Jan 03 05:44:30 crc kubenswrapper[4854]: W0103 05:44:30.512411 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7ab4e78_6d39_46bb_bb00_cd098e009115.slice/crio-a0de1a67b8d3faff9426842cbcbfb04a466305cbfb5ff5b1b668e0c9f0ea4a0b WatchSource:0}: Error finding container a0de1a67b8d3faff9426842cbcbfb04a466305cbfb5ff5b1b668e0c9f0ea4a0b: Status 404 returned error can't find the container with id a0de1a67b8d3faff9426842cbcbfb04a466305cbfb5ff5b1b668e0c9f0ea4a0b Jan 03 05:44:30 crc kubenswrapper[4854]: I0103 05:44:30.772718 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 03 
05:44:31 crc kubenswrapper[4854]: I0103 05:44:31.277020 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" event={"ID":"a7ab4e78-6d39-46bb-bb00-cd098e009115","Type":"ContainerStarted","Data":"a0de1a67b8d3faff9426842cbcbfb04a466305cbfb5ff5b1b668e0c9f0ea4a0b"} Jan 03 05:44:31 crc kubenswrapper[4854]: I0103 05:44:31.279692 4854 generic.go:334] "Generic (PLEG): container finished" podID="8f4664fb-8570-4f61-b99f-37a3e9031738" containerID="d312f0bb860ce2820784141d1dab5cd2a5799a42fc7edb81219216d442d05031" exitCode=0 Jan 03 05:44:31 crc kubenswrapper[4854]: I0103 05:44:31.279754 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" event={"ID":"8f4664fb-8570-4f61-b99f-37a3e9031738","Type":"ContainerDied","Data":"d312f0bb860ce2820784141d1dab5cd2a5799a42fc7edb81219216d442d05031"} Jan 03 05:44:31 crc kubenswrapper[4854]: I0103 05:44:31.284739 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 03 05:44:31 crc kubenswrapper[4854]: I0103 05:44:31.623763 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 03 05:44:32 crc kubenswrapper[4854]: I0103 05:44:32.134241 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91a6bd1-6075-4979-a9af-2ed9d6a75e42" path="/var/lib/kubelet/pods/a91a6bd1-6075-4979-a9af-2ed9d6a75e42/volumes" Jan 03 05:44:32 crc kubenswrapper[4854]: I0103 05:44:32.288359 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" event={"ID":"a7ab4e78-6d39-46bb-bb00-cd098e009115","Type":"ContainerStarted","Data":"ad936cc3a4ec9bd5774af87fc83e8145bcf85213f2a2d73afc36a8ce4549f79c"} Jan 03 05:44:32 crc kubenswrapper[4854]: I0103 05:44:32.867519 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.079779 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.106762 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-c4tlf"] Jan 03 05:44:33 crc kubenswrapper[4854]: E0103 05:44:33.106974 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4664fb-8570-4f61-b99f-37a3e9031738" containerName="controller-manager" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.106985 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4664fb-8570-4f61-b99f-37a3e9031738" containerName="controller-manager" Jan 03 05:44:33 crc kubenswrapper[4854]: E0103 05:44:33.107003 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.107008 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.107120 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.107141 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4664fb-8570-4f61-b99f-37a3e9031738" containerName="controller-manager" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.107501 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.130565 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-c4tlf"] Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.147103 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-config\") pod \"8f4664fb-8570-4f61-b99f-37a3e9031738\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.147222 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-client-ca\") pod \"8f4664fb-8570-4f61-b99f-37a3e9031738\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.148175 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-config" (OuterVolumeSpecName: "config") pod "8f4664fb-8570-4f61-b99f-37a3e9031738" (UID: "8f4664fb-8570-4f61-b99f-37a3e9031738"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.148216 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f4664fb-8570-4f61-b99f-37a3e9031738" (UID: "8f4664fb-8570-4f61-b99f-37a3e9031738"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.148356 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4664fb-8570-4f61-b99f-37a3e9031738-serving-cert\") pod \"8f4664fb-8570-4f61-b99f-37a3e9031738\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.149353 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-proxy-ca-bundles\") pod \"8f4664fb-8570-4f61-b99f-37a3e9031738\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.149385 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvj6m\" (UniqueName: \"kubernetes.io/projected/8f4664fb-8570-4f61-b99f-37a3e9031738-kube-api-access-rvj6m\") pod \"8f4664fb-8570-4f61-b99f-37a3e9031738\" (UID: \"8f4664fb-8570-4f61-b99f-37a3e9031738\") " Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.149958 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8f4664fb-8570-4f61-b99f-37a3e9031738" (UID: "8f4664fb-8570-4f61-b99f-37a3e9031738"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.150012 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.150033 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.154139 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4664fb-8570-4f61-b99f-37a3e9031738-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f4664fb-8570-4f61-b99f-37a3e9031738" (UID: "8f4664fb-8570-4f61-b99f-37a3e9031738"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.160300 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4664fb-8570-4f61-b99f-37a3e9031738-kube-api-access-rvj6m" (OuterVolumeSpecName: "kube-api-access-rvj6m") pod "8f4664fb-8570-4f61-b99f-37a3e9031738" (UID: "8f4664fb-8570-4f61-b99f-37a3e9031738"). InnerVolumeSpecName "kube-api-access-rvj6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.251219 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f336e4-1795-44b6-bde4-e614b5bee120-serving-cert\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.251321 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-client-ca\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.251535 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-config\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.251564 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-proxy-ca-bundles\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.251646 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5t6\" (UniqueName: \"kubernetes.io/projected/20f336e4-1795-44b6-bde4-e614b5bee120-kube-api-access-7l5t6\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.251839 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4664fb-8570-4f61-b99f-37a3e9031738-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.251867 4854 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f4664fb-8570-4f61-b99f-37a3e9031738-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.251893 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvj6m\" (UniqueName: \"kubernetes.io/projected/8f4664fb-8570-4f61-b99f-37a3e9031738-kube-api-access-rvj6m\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.297138 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" event={"ID":"8f4664fb-8570-4f61-b99f-37a3e9031738","Type":"ContainerDied","Data":"ac2c3b2b3e8ac75104447e9bf4bb9f3ad00d2f3d09db6b7b203f0f5ac926fde1"} Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.297187 4854 scope.go:117] "RemoveContainer" 
containerID="d312f0bb860ce2820784141d1dab5cd2a5799a42fc7edb81219216d442d05031" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.297186 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.297558 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.302539 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.317372 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" podStartSLOduration=5.317312983 podStartE2EDuration="5.317312983s" podCreationTimestamp="2026-01-03 05:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:44:33.316310108 +0000 UTC m=+251.642886710" watchObservedRunningTime="2026-01-03 05:44:33.317312983 +0000 UTC m=+251.643889615" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.353233 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-client-ca\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.353339 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-config\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.353401 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-proxy-ca-bundles\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.353454 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l5t6\" (UniqueName: \"kubernetes.io/projected/20f336e4-1795-44b6-bde4-e614b5bee120-kube-api-access-7l5t6\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.353520 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f336e4-1795-44b6-bde4-e614b5bee120-serving-cert\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.355092 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-client-ca\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.358120 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-config\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.361297 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8"] Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.365546 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-proxy-ca-bundles\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.370900 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l5t6\" (UniqueName: \"kubernetes.io/projected/20f336e4-1795-44b6-bde4-e614b5bee120-kube-api-access-7l5t6\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.372290 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f336e4-1795-44b6-bde4-e614b5bee120-serving-cert\") pod \"controller-manager-68ff5b985d-c4tlf\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.374889 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7ff6f7c9f7-sxln8"] Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.426769 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.753046 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 03 05:44:33 crc kubenswrapper[4854]: I0103 05:44:33.875155 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-c4tlf"] Jan 03 05:44:33 crc kubenswrapper[4854]: W0103 05:44:33.879064 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20f336e4_1795_44b6_bde4_e614b5bee120.slice/crio-cb6d3001da420da3cab59d26267b7798517060cc8de9272617cb516a3947043e WatchSource:0}: Error finding container cb6d3001da420da3cab59d26267b7798517060cc8de9272617cb516a3947043e: Status 404 returned error can't find the container with id cb6d3001da420da3cab59d26267b7798517060cc8de9272617cb516a3947043e Jan 03 05:44:34 crc kubenswrapper[4854]: I0103 05:44:34.127399 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4664fb-8570-4f61-b99f-37a3e9031738" path="/var/lib/kubelet/pods/8f4664fb-8570-4f61-b99f-37a3e9031738/volumes" Jan 03 05:44:34 crc kubenswrapper[4854]: I0103 05:44:34.134586 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 03 05:44:34 crc kubenswrapper[4854]: I0103 05:44:34.306436 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" event={"ID":"20f336e4-1795-44b6-bde4-e614b5bee120","Type":"ContainerStarted","Data":"cb6d3001da420da3cab59d26267b7798517060cc8de9272617cb516a3947043e"} Jan 03 05:44:34 crc kubenswrapper[4854]: I0103 05:44:34.563516 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 03 05:44:34 crc kubenswrapper[4854]: I0103 05:44:34.664401 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 03 05:44:34 crc kubenswrapper[4854]: I0103 05:44:34.888825 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 03 05:44:35 crc kubenswrapper[4854]: I0103 05:44:35.069208 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 03 05:44:35 crc kubenswrapper[4854]: I0103 05:44:35.324293 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" event={"ID":"20f336e4-1795-44b6-bde4-e614b5bee120","Type":"ContainerStarted","Data":"0578ac453395e979a1316b50402ba660b646103a4ccce294c2f4164820cea48e"} Jan 03 05:44:35 crc kubenswrapper[4854]: I0103 05:44:35.353126 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" podStartSLOduration=7.353075937 podStartE2EDuration="7.353075937s" podCreationTimestamp="2026-01-03 05:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:44:35.347173097 +0000 UTC m=+253.673749699" watchObservedRunningTime="2026-01-03 05:44:35.353075937 +0000 UTC m=+253.679652549" Jan 03 05:44:35 crc kubenswrapper[4854]: I0103 05:44:35.913597 4854 
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.143522 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.178888 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.334176 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.334252 4854 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="98de16ac545bc42fd7e40a9a51c5b4ea215977b46afc3d7a69dcbe3032d6151a" exitCode=137
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.334624 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf"
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.349285 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf"
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.492663 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.509117 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.509259 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.603632 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.603718 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.603781 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.603819 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.603813 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.603889 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.603937 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.604000 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.604029 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.604363 4854 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.604392 4854 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.604411 4854 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.604429 4854 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.614792 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.705954 4854 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 03 05:44:36 crc kubenswrapper[4854]: I0103 05:44:36.753140 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 03 05:44:37 crc kubenswrapper[4854]: I0103 05:44:37.341103 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 03 05:44:37 crc kubenswrapper[4854]: I0103 05:44:37.341583 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 03 05:44:37 crc kubenswrapper[4854]: I0103 05:44:37.341996 4854 scope.go:117] "RemoveContainer" containerID="98de16ac545bc42fd7e40a9a51c5b4ea215977b46afc3d7a69dcbe3032d6151a"
Jan 03 05:44:37 crc kubenswrapper[4854]: I0103 05:44:37.544606 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 03 05:44:37 crc kubenswrapper[4854]: I0103 05:44:37.892854 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 03 05:44:38 crc kubenswrapper[4854]: I0103 05:44:38.130557 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 03 05:44:38 crc kubenswrapper[4854]: I0103 05:44:38.332410 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 03 05:44:38 crc kubenswrapper[4854]: I0103 05:44:38.637349 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 03 05:44:38 crc kubenswrapper[4854]: I0103 05:44:38.793987 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 03 05:44:39 crc kubenswrapper[4854]: I0103 05:44:39.179276 4854 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 03 05:44:39 crc kubenswrapper[4854]: I0103 05:44:39.516725 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.033118 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-64gkx"]
Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.033590 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-64gkx" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" containerName="registry-server" containerID="cri-o://c084121ee582724f034cb0a71f515135288bad9d7d51135e49266427e49c725c" gracePeriod=30
Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.046649 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqxfg"]
Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.047011 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bqxfg" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerName="registry-server" containerID="cri-o://7d0908a790711ab88ce877b0913dd162e60e002f0e3309ef83e63fbfc04c76d8" gracePeriod=30
Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.059812 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mfbxz"]
Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.060289 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mfbxz" podUID="54056ea8-c177-4995-8261-209eb3200f5f" containerName="registry-server" containerID="cri-o://99bfefcdc6181293de37907430c0cb1c85c057888c860b287f9c5ca01c37fd9c" gracePeriod=30
Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.084833 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jw8v"]
pods=["openshift-marketplace/marketplace-operator-79b997595-2jw8v"] Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.085154 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator" containerID="cri-o://116076dc26f956b7f5b8277e722211dfb77ce5ffeb7b78f8e1ea0358f7dd9fcb" gracePeriod=30 Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.102877 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8dxw"] Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.103783 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c8dxw" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerName="registry-server" containerID="cri-o://d69da54aa74785fd1ee550ed529b9658bf6d8cb5e91ca308295fdce996446a48" gracePeriod=30 Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.114885 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f2b22"] Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.115922 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f2b22" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerName="registry-server" containerID="cri-o://879c4d1e22238a11a74d0e95a96ce85f406cd3da2e7217ea2f4dec58d97aea69" gracePeriod=30 Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.124195 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lcbzf"] Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.124890 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.134693 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lcbzf"] Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.266695 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.287149 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07528198-b6c3-44c7-aec4-4647d7a06116-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.287306 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlf46\" (UniqueName: \"kubernetes.io/projected/07528198-b6c3-44c7-aec4-4647d7a06116-kube-api-access-hlf46\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.287424 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07528198-b6c3-44c7-aec4-4647d7a06116-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.389296 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlf46\" (UniqueName: \"kubernetes.io/projected/07528198-b6c3-44c7-aec4-4647d7a06116-kube-api-access-hlf46\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.389380 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07528198-b6c3-44c7-aec4-4647d7a06116-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.390489 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07528198-b6c3-44c7-aec4-4647d7a06116-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.391418 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07528198-b6c3-44c7-aec4-4647d7a06116-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.399511 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07528198-b6c3-44c7-aec4-4647d7a06116-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.407227 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlf46\" (UniqueName: \"kubernetes.io/projected/07528198-b6c3-44c7-aec4-4647d7a06116-kube-api-access-hlf46\") pod \"marketplace-operator-79b997595-lcbzf\" (UID: \"07528198-b6c3-44c7-aec4-4647d7a06116\") " pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.442648 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.689273 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.716187 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2jw8v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.716276 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 03 05:44:40 crc kubenswrapper[4854]: I0103 05:44:40.913808 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lcbzf"] Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.378859 4854 generic.go:334] "Generic (PLEG): container finished" podID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerID="7d0908a790711ab88ce877b0913dd162e60e002f0e3309ef83e63fbfc04c76d8" exitCode=0 Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.379313 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqxfg" event={"ID":"6127414f-e3e3-4c52-81a8-f6fea70b7d0c","Type":"ContainerDied","Data":"7d0908a790711ab88ce877b0913dd162e60e002f0e3309ef83e63fbfc04c76d8"} Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.382751 4854 generic.go:334] "Generic (PLEG): container finished" podID="54056ea8-c177-4995-8261-209eb3200f5f" containerID="99bfefcdc6181293de37907430c0cb1c85c057888c860b287f9c5ca01c37fd9c" exitCode=0 Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.382806 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mfbxz" event={"ID":"54056ea8-c177-4995-8261-209eb3200f5f","Type":"ContainerDied","Data":"99bfefcdc6181293de37907430c0cb1c85c057888c860b287f9c5ca01c37fd9c"} Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.387971 4854 generic.go:334] "Generic (PLEG): container finished" 
podID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerID="879c4d1e22238a11a74d0e95a96ce85f406cd3da2e7217ea2f4dec58d97aea69" exitCode=0 Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.388022 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2b22" event={"ID":"332dcfb7-8bcf-46bf-9168-4bdb4411e55e","Type":"ContainerDied","Data":"879c4d1e22238a11a74d0e95a96ce85f406cd3da2e7217ea2f4dec58d97aea69"} Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.390393 4854 generic.go:334] "Generic (PLEG): container finished" podID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerID="116076dc26f956b7f5b8277e722211dfb77ce5ffeb7b78f8e1ea0358f7dd9fcb" exitCode=0 Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.390435 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" event={"ID":"5b7f5d78-25a5-497a-9315-494fe26edb93","Type":"ContainerDied","Data":"116076dc26f956b7f5b8277e722211dfb77ce5ffeb7b78f8e1ea0358f7dd9fcb"} Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.390458 4854 scope.go:117] "RemoveContainer" containerID="10cde2faee74631a8c6185f6e956d7af2bdb78e3cb320f987bb30ac1860b9571" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.407527 4854 generic.go:334] "Generic (PLEG): container finished" podID="80855e9f-3a0c-439c-87cf-933b8825c398" containerID="c084121ee582724f034cb0a71f515135288bad9d7d51135e49266427e49c725c" exitCode=0 Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.407597 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64gkx" event={"ID":"80855e9f-3a0c-439c-87cf-933b8825c398","Type":"ContainerDied","Data":"c084121ee582724f034cb0a71f515135288bad9d7d51135e49266427e49c725c"} Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.410107 4854 generic.go:334] "Generic (PLEG): container finished" podID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerID="d69da54aa74785fd1ee550ed529b9658bf6d8cb5e91ca308295fdce996446a48" exitCode=0 Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.410144 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8dxw" event={"ID":"be6f79dd-0ea7-442e-ab1e-e35b15d45721","Type":"ContainerDied","Data":"d69da54aa74785fd1ee550ed529b9658bf6d8cb5e91ca308295fdce996446a48"} Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.411327 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" event={"ID":"07528198-b6c3-44c7-aec4-4647d7a06116","Type":"ContainerStarted","Data":"a7a63037d37b6fac54b0bc52c839f468d82108330b9ebd9b890ec2d0f1e99c45"} Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.698897 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.746940 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.814976 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-utilities\") pod \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.815051 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gshw8\" (UniqueName: \"kubernetes.io/projected/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-kube-api-access-gshw8\") pod \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.815151 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-catalog-content\") pod \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\" (UID: \"6127414f-e3e3-4c52-81a8-f6fea70b7d0c\") " Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.816402 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-utilities" (OuterVolumeSpecName: "utilities") pod "6127414f-e3e3-4c52-81a8-f6fea70b7d0c" (UID: "6127414f-e3e3-4c52-81a8-f6fea70b7d0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.821437 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-kube-api-access-gshw8" (OuterVolumeSpecName: "kube-api-access-gshw8") pod "6127414f-e3e3-4c52-81a8-f6fea70b7d0c" (UID: "6127414f-e3e3-4c52-81a8-f6fea70b7d0c"). InnerVolumeSpecName "kube-api-access-gshw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.845382 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.852730 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.856632 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2b22" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.860671 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.867464 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.869286 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.894842 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6127414f-e3e3-4c52-81a8-f6fea70b7d0c" (UID: "6127414f-e3e3-4c52-81a8-f6fea70b7d0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.916417 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-catalog-content\") pod \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.916543 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-utilities\") pod \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.916614 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9kcs\" (UniqueName: \"kubernetes.io/projected/be6f79dd-0ea7-442e-ab1e-e35b15d45721-kube-api-access-c9kcs\") pod \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\" (UID: \"be6f79dd-0ea7-442e-ab1e-e35b15d45721\") " Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.916849 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.916866 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gshw8\" (UniqueName: \"kubernetes.io/projected/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-kube-api-access-gshw8\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.916876 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6127414f-e3e3-4c52-81a8-f6fea70b7d0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.917774 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-utilities" (OuterVolumeSpecName: "utilities") pod "be6f79dd-0ea7-442e-ab1e-e35b15d45721" (UID: "be6f79dd-0ea7-442e-ab1e-e35b15d45721"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.921224 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6f79dd-0ea7-442e-ab1e-e35b15d45721-kube-api-access-c9kcs" (OuterVolumeSpecName: "kube-api-access-c9kcs") pod "be6f79dd-0ea7-442e-ab1e-e35b15d45721" (UID: "be6f79dd-0ea7-442e-ab1e-e35b15d45721"). InnerVolumeSpecName "kube-api-access-c9kcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:41 crc kubenswrapper[4854]: I0103 05:44:41.942692 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be6f79dd-0ea7-442e-ab1e-e35b15d45721" (UID: "be6f79dd-0ea7-442e-ab1e-e35b15d45721"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017493 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz766\" (UniqueName: \"kubernetes.io/projected/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-kube-api-access-kz766\") pod \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017569 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-utilities\") pod \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017590 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57qpt\" (UniqueName: \"kubernetes.io/projected/54056ea8-c177-4995-8261-209eb3200f5f-kube-api-access-57qpt\") pod \"54056ea8-c177-4995-8261-209eb3200f5f\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017614 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-catalog-content\") pod \"54056ea8-c177-4995-8261-209eb3200f5f\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017630 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-utilities\") pod \"80855e9f-3a0c-439c-87cf-933b8825c398\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017687 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-catalog-content\") pod \"80855e9f-3a0c-439c-87cf-933b8825c398\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017735 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdlsv\" (UniqueName: \"kubernetes.io/projected/5b7f5d78-25a5-497a-9315-494fe26edb93-kube-api-access-pdlsv\") pod \"5b7f5d78-25a5-497a-9315-494fe26edb93\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017755 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-trusted-ca\") pod \"5b7f5d78-25a5-497a-9315-494fe26edb93\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017775 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtmpx\" 
(UniqueName: \"kubernetes.io/projected/80855e9f-3a0c-439c-87cf-933b8825c398-kube-api-access-jtmpx\") pod \"80855e9f-3a0c-439c-87cf-933b8825c398\" (UID: \"80855e9f-3a0c-439c-87cf-933b8825c398\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017816 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-utilities\") pod \"54056ea8-c177-4995-8261-209eb3200f5f\" (UID: \"54056ea8-c177-4995-8261-209eb3200f5f\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017835 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-operator-metrics\") pod \"5b7f5d78-25a5-497a-9315-494fe26edb93\" (UID: \"5b7f5d78-25a5-497a-9315-494fe26edb93\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.017858 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-catalog-content\") pod \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\" (UID: \"332dcfb7-8bcf-46bf-9168-4bdb4411e55e\") " Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.018067 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9kcs\" (UniqueName: \"kubernetes.io/projected/be6f79dd-0ea7-442e-ab1e-e35b15d45721-kube-api-access-c9kcs\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.018097 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.018108 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6f79dd-0ea7-442e-ab1e-e35b15d45721-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.018267 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-utilities" (OuterVolumeSpecName: "utilities") pod "332dcfb7-8bcf-46bf-9168-4bdb4411e55e" (UID: "332dcfb7-8bcf-46bf-9168-4bdb4411e55e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.018401 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5b7f5d78-25a5-497a-9315-494fe26edb93" (UID: "5b7f5d78-25a5-497a-9315-494fe26edb93"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.020095 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-utilities" (OuterVolumeSpecName: "utilities") pod "54056ea8-c177-4995-8261-209eb3200f5f" (UID: "54056ea8-c177-4995-8261-209eb3200f5f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.020247 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54056ea8-c177-4995-8261-209eb3200f5f-kube-api-access-57qpt" (OuterVolumeSpecName: "kube-api-access-57qpt") pod "54056ea8-c177-4995-8261-209eb3200f5f" (UID: "54056ea8-c177-4995-8261-209eb3200f5f"). InnerVolumeSpecName "kube-api-access-57qpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.021509 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7f5d78-25a5-497a-9315-494fe26edb93-kube-api-access-pdlsv" (OuterVolumeSpecName: "kube-api-access-pdlsv") pod "5b7f5d78-25a5-497a-9315-494fe26edb93" (UID: "5b7f5d78-25a5-497a-9315-494fe26edb93"). InnerVolumeSpecName "kube-api-access-pdlsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.022166 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80855e9f-3a0c-439c-87cf-933b8825c398-kube-api-access-jtmpx" (OuterVolumeSpecName: "kube-api-access-jtmpx") pod "80855e9f-3a0c-439c-87cf-933b8825c398" (UID: "80855e9f-3a0c-439c-87cf-933b8825c398"). InnerVolumeSpecName "kube-api-access-jtmpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.024250 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5b7f5d78-25a5-497a-9315-494fe26edb93" (UID: "5b7f5d78-25a5-497a-9315-494fe26edb93"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.024568 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-kube-api-access-kz766" (OuterVolumeSpecName: "kube-api-access-kz766") pod "332dcfb7-8bcf-46bf-9168-4bdb4411e55e" (UID: "332dcfb7-8bcf-46bf-9168-4bdb4411e55e"). InnerVolumeSpecName "kube-api-access-kz766". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.027964 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-utilities" (OuterVolumeSpecName: "utilities") pod "80855e9f-3a0c-439c-87cf-933b8825c398" (UID: "80855e9f-3a0c-439c-87cf-933b8825c398"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.080328 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80855e9f-3a0c-439c-87cf-933b8825c398" (UID: "80855e9f-3a0c-439c-87cf-933b8825c398"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.099120 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54056ea8-c177-4995-8261-209eb3200f5f" (UID: "54056ea8-c177-4995-8261-209eb3200f5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.109425 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118690 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz766\" (UniqueName: \"kubernetes.io/projected/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-kube-api-access-kz766\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118712 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118724 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57qpt\" (UniqueName: \"kubernetes.io/projected/54056ea8-c177-4995-8261-209eb3200f5f-kube-api-access-57qpt\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118732 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118741 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118749 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80855e9f-3a0c-439c-87cf-933b8825c398-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118758 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdlsv\" (UniqueName: \"kubernetes.io/projected/5b7f5d78-25a5-497a-9315-494fe26edb93-kube-api-access-pdlsv\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118768 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtmpx\" (UniqueName: \"kubernetes.io/projected/80855e9f-3a0c-439c-87cf-933b8825c398-kube-api-access-jtmpx\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118777 4854 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118785 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54056ea8-c177-4995-8261-209eb3200f5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.118794 4854 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b7f5d78-25a5-497a-9315-494fe26edb93-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.150325 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "332dcfb7-8bcf-46bf-9168-4bdb4411e55e" (UID: "332dcfb7-8bcf-46bf-9168-4bdb4411e55e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.220299 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332dcfb7-8bcf-46bf-9168-4bdb4411e55e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.420815 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8dxw" event={"ID":"be6f79dd-0ea7-442e-ab1e-e35b15d45721","Type":"ContainerDied","Data":"714fcb62e30045c4d93426893a2a510ee20c068a585a9edf26cc6be8db3fb41e"} Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.421335 4854 scope.go:117] "RemoveContainer" containerID="d69da54aa74785fd1ee550ed529b9658bf6d8cb5e91ca308295fdce996446a48" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.420921 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8dxw" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.423494 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" event={"ID":"07528198-b6c3-44c7-aec4-4647d7a06116","Type":"ContainerStarted","Data":"ccd9c5a4f61c165f96a2b42680ccd773657a0bfcb8c8599cfbbde2f069b6a6c0"} Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.424346 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.427334 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqxfg" event={"ID":"6127414f-e3e3-4c52-81a8-f6fea70b7d0c","Type":"ContainerDied","Data":"292e5e8a3392ac1fdc0b9b4546a40aad83a6f2d190d0d5215204fb53bca1581f"} Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.427666 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqxfg" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.428343 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.429924 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mfbxz" event={"ID":"54056ea8-c177-4995-8261-209eb3200f5f","Type":"ContainerDied","Data":"39ad847905265176cc0e5e459ad99dd3086d02c764b303959bc23f43071c1b5f"} Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.430009 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mfbxz" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.438489 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2b22" event={"ID":"332dcfb7-8bcf-46bf-9168-4bdb4411e55e","Type":"ContainerDied","Data":"f7c4e2e159f4fd1ad4dda96be9bcc2a89a389df5d1c707fb0a48335017eb6b64"} Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.438873 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2b22" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.446763 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.447279 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2jw8v" event={"ID":"5b7f5d78-25a5-497a-9315-494fe26edb93","Type":"ContainerDied","Data":"e0665b3045a4304a58bf0d3e0540ab570d559dd3be1331fcbf5c0932309d1f22"} Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.449016 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podStartSLOduration=2.448999581 podStartE2EDuration="2.448999581s" podCreationTimestamp="2026-01-03 05:44:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:44:42.444787475 +0000 UTC m=+260.771364067" watchObservedRunningTime="2026-01-03 05:44:42.448999581 +0000 UTC m=+260.775576153" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.454588 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64gkx" event={"ID":"80855e9f-3a0c-439c-87cf-933b8825c398","Type":"ContainerDied","Data":"d28c158b59cc86c2e867be449bab788745fbdae228a2129c303acc502bd7f9dd"} Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.454689 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-64gkx" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.457911 4854 scope.go:117] "RemoveContainer" containerID="c7dec33c8735f5b4e73712682ae1341fc298b8078565d9fb49fb2bcc536db146" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.489395 4854 scope.go:117] "RemoveContainer" containerID="6e85964ce18a1663b8c86086eb0ea005f34d83633036e6ee5c67aa5f0cdea28c" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.506025 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jw8v"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.517168 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2jw8v"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.518552 4854 scope.go:117] "RemoveContainer" containerID="7d0908a790711ab88ce877b0913dd162e60e002f0e3309ef83e63fbfc04c76d8" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.519283 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-64gkx"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.521566 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-64gkx"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.533300 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f2b22"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.538956 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f2b22"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.544595 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8dxw"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.549676 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8dxw"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.555600 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqxfg"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.558513 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bqxfg"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.560857 4854 scope.go:117] "RemoveContainer" containerID="f82c4f86011d8a52d9798620693b5d376a0813d6575a5151c80632ad36eeec27" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.570600 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mfbxz"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.576268 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mfbxz"] Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.582530 4854 scope.go:117] "RemoveContainer" containerID="56a849392b60c950d251d6f57a5fc8a99f1af50bc3d3301a78065d9e3b1a5e1b" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.610790 4854 scope.go:117] "RemoveContainer" containerID="99bfefcdc6181293de37907430c0cb1c85c057888c860b287f9c5ca01c37fd9c" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.635190 4854 scope.go:117] "RemoveContainer" containerID="cf4de877932af47c42d0ca2ef55c63b7b82a9fc53197cb9d734f7bef5741437e" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.652589 4854 scope.go:117] "RemoveContainer" 
containerID="02aa45d7932041c095b0d160f4283abc73127fc0b55b44cbca74b6b43b39a74f" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.670605 4854 scope.go:117] "RemoveContainer" containerID="879c4d1e22238a11a74d0e95a96ce85f406cd3da2e7217ea2f4dec58d97aea69" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.698418 4854 scope.go:117] "RemoveContainer" containerID="52a75d9111d592b472af1dc45f1f0e978fa384ce0d37e2d305e42bac2b12c7fa" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.722542 4854 scope.go:117] "RemoveContainer" containerID="fe03d90b389b8feb1e2d2b0401e8de71976317947ab776a2536d1d01888eedc4" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.743165 4854 scope.go:117] "RemoveContainer" containerID="116076dc26f956b7f5b8277e722211dfb77ce5ffeb7b78f8e1ea0358f7dd9fcb" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.762860 4854 scope.go:117] "RemoveContainer" containerID="c084121ee582724f034cb0a71f515135288bad9d7d51135e49266427e49c725c" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.779947 4854 scope.go:117] "RemoveContainer" containerID="5009f6a4818fde46c27896adc63c2053578af8e21905d228fc7129847413f341" Jan 03 05:44:42 crc kubenswrapper[4854]: I0103 05:44:42.808067 4854 scope.go:117] "RemoveContainer" containerID="ea8e5712cadd80a475b2f05b210acffb5e24206ae558a8139fe633a1c7bf8f0b" Jan 03 05:44:43 crc kubenswrapper[4854]: I0103 05:44:43.147808 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 03 05:44:43 crc kubenswrapper[4854]: I0103 05:44:43.260845 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 03 05:44:43 crc kubenswrapper[4854]: I0103 05:44:43.557123 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.130485 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" path="/var/lib/kubelet/pods/332dcfb7-8bcf-46bf-9168-4bdb4411e55e/volumes" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.132310 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54056ea8-c177-4995-8261-209eb3200f5f" path="/var/lib/kubelet/pods/54056ea8-c177-4995-8261-209eb3200f5f/volumes" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.133820 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" path="/var/lib/kubelet/pods/5b7f5d78-25a5-497a-9315-494fe26edb93/volumes" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.136052 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" path="/var/lib/kubelet/pods/6127414f-e3e3-4c52-81a8-f6fea70b7d0c/volumes" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.138457 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" path="/var/lib/kubelet/pods/80855e9f-3a0c-439c-87cf-933b8825c398/volumes" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.140643 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" path="/var/lib/kubelet/pods/be6f79dd-0ea7-442e-ab1e-e35b15d45721/volumes" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.203990 4854 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.590242 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.646565 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 03 05:44:44 crc kubenswrapper[4854]: I0103 05:44:44.908526 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 03 05:44:45 crc kubenswrapper[4854]: I0103 05:44:45.507650 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 03 05:44:45 crc kubenswrapper[4854]: I0103 05:44:45.726325 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 03 05:44:46 crc kubenswrapper[4854]: I0103 05:44:46.137977 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 03 05:44:47 crc kubenswrapper[4854]: I0103 05:44:47.090683 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 03 05:44:48 crc kubenswrapper[4854]: I0103 05:44:48.021118 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 03 05:44:48 crc kubenswrapper[4854]: I0103 05:44:48.050637 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 03 05:44:48 crc kubenswrapper[4854]: I0103 05:44:48.509142 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 03 05:44:48 crc kubenswrapper[4854]: I0103 05:44:48.925105 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 03 05:44:49 crc kubenswrapper[4854]: I0103 05:44:49.621375 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" podUID="ea3251f8-9e38-4094-86f1-98187e5b2c75" containerName="registry" containerID="cri-o://139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b" gracePeriod=30 Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.044900 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.125117 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ea3251f8-9e38-4094-86f1-98187e5b2c75\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.125161 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-tls\") pod \"ea3251f8-9e38-4094-86f1-98187e5b2c75\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.125184 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3251f8-9e38-4094-86f1-98187e5b2c75-installation-pull-secrets\") pod \"ea3251f8-9e38-4094-86f1-98187e5b2c75\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.125208 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-certificates\") pod \"ea3251f8-9e38-4094-86f1-98187e5b2c75\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.125239 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3251f8-9e38-4094-86f1-98187e5b2c75-ca-trust-extracted\") pod \"ea3251f8-9e38-4094-86f1-98187e5b2c75\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.125260 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-bound-sa-token\") pod \"ea3251f8-9e38-4094-86f1-98187e5b2c75\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.125280 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-trusted-ca\") pod \"ea3251f8-9e38-4094-86f1-98187e5b2c75\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.125307 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85lzq\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-kube-api-access-85lzq\") pod \"ea3251f8-9e38-4094-86f1-98187e5b2c75\" (UID: \"ea3251f8-9e38-4094-86f1-98187e5b2c75\") " Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.126975 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ea3251f8-9e38-4094-86f1-98187e5b2c75" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.128183 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ea3251f8-9e38-4094-86f1-98187e5b2c75" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.131765 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ea3251f8-9e38-4094-86f1-98187e5b2c75" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.133011 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ea3251f8-9e38-4094-86f1-98187e5b2c75" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.133713 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-kube-api-access-85lzq" (OuterVolumeSpecName: "kube-api-access-85lzq") pod "ea3251f8-9e38-4094-86f1-98187e5b2c75" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75"). InnerVolumeSpecName "kube-api-access-85lzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.136614 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ea3251f8-9e38-4094-86f1-98187e5b2c75" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.137480 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3251f8-9e38-4094-86f1-98187e5b2c75-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ea3251f8-9e38-4094-86f1-98187e5b2c75" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.147252 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3251f8-9e38-4094-86f1-98187e5b2c75-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ea3251f8-9e38-4094-86f1-98187e5b2c75" (UID: "ea3251f8-9e38-4094-86f1-98187e5b2c75"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.226695 4854 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea3251f8-9e38-4094-86f1-98187e5b2c75-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.226979 4854 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.227095 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.227219 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85lzq\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-kube-api-access-85lzq\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.227302 4854 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.227379 4854 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea3251f8-9e38-4094-86f1-98187e5b2c75-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.227440 4854 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea3251f8-9e38-4094-86f1-98187e5b2c75-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.521356 4854 generic.go:334] "Generic (PLEG): container finished" podID="ea3251f8-9e38-4094-86f1-98187e5b2c75" containerID="139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b" exitCode=0 Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.521411 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.521414 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" event={"ID":"ea3251f8-9e38-4094-86f1-98187e5b2c75","Type":"ContainerDied","Data":"139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b"} Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.521692 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wc7xf" event={"ID":"ea3251f8-9e38-4094-86f1-98187e5b2c75","Type":"ContainerDied","Data":"bc8bcb436e40f9b244ffa717821590a37f65f804f0ec9c69c9a9fcb2fe572167"} Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.521720 4854 scope.go:117] "RemoveContainer" containerID="139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.547330 4854 scope.go:117] "RemoveContainer" containerID="139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b" Jan 03 05:44:50 crc kubenswrapper[4854]: E0103 05:44:50.547735 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b\": container with ID starting with 139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b not found: ID does not exist" containerID="139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.547759 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b"} err="failed to get container status \"139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b\": rpc error: code = NotFound desc = could not find container \"139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b\": container with ID starting with 139b7c856f5bdcb9662e9b7169a664887b143754dfffc4e657b867f6147a922b not found: ID does not exist" Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.550091 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wc7xf"] Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.555591 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wc7xf"] Jan 03 05:44:50 crc kubenswrapper[4854]: I0103 05:44:50.715156 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 03 05:44:51 crc kubenswrapper[4854]: I0103 05:44:51.027935 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 03 05:44:51 crc kubenswrapper[4854]: I0103 05:44:51.088618 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 03 05:44:51 crc kubenswrapper[4854]: I0103 05:44:51.525971 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 03 05:44:51 crc kubenswrapper[4854]: I0103 05:44:51.560553 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 03 05:44:52 crc kubenswrapper[4854]: 
I0103 05:44:52.128443 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3251f8-9e38-4094-86f1-98187e5b2c75" path="/var/lib/kubelet/pods/ea3251f8-9e38-4094-86f1-98187e5b2c75/volumes" Jan 03 05:44:52 crc kubenswrapper[4854]: I0103 05:44:52.294098 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 03 05:44:52 crc kubenswrapper[4854]: I0103 05:44:52.472569 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 03 05:44:52 crc kubenswrapper[4854]: I0103 05:44:52.818483 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 03 05:44:53 crc kubenswrapper[4854]: I0103 05:44:53.049931 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 03 05:44:53 crc kubenswrapper[4854]: I0103 05:44:53.773193 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 03 05:44:54 crc kubenswrapper[4854]: I0103 05:44:54.238841 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 03 05:44:55 crc kubenswrapper[4854]: I0103 05:44:55.067907 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 03 05:44:55 crc kubenswrapper[4854]: I0103 05:44:55.470545 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 03 05:44:55 crc kubenswrapper[4854]: I0103 05:44:55.529843 4854 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 03 05:44:55 crc kubenswrapper[4854]: I0103 05:44:55.659279 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 03 05:44:56 crc kubenswrapper[4854]: I0103 05:44:56.173659 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.127499 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.401204 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.540509 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.864529 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5dgzc"] Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.865399 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54056ea8-c177-4995-8261-209eb3200f5f" containerName="extract-utilities" Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.865477 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="54056ea8-c177-4995-8261-209eb3200f5f" containerName="extract-utilities" Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.865537 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerName="extract-content" 
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.865600 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.865665 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.865724 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.865787 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54056ea8-c177-4995-8261-209eb3200f5f" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.865848 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="54056ea8-c177-4995-8261-209eb3200f5f" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.865905 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" containerName="extract-utilities"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.865960 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" containerName="extract-utilities"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866016 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.866096 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866162 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.866226 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866285 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.866344 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866402 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerName="extract-utilities"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.866457 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerName="extract-utilities"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866513 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54056ea8-c177-4995-8261-209eb3200f5f" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.866575 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="54056ea8-c177-4995-8261-209eb3200f5f" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866636 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerName="extract-utilities"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.866691 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerName="extract-utilities"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866752 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.866812 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866873 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3251f8-9e38-4094-86f1-98187e5b2c75" containerName="registry"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.866934 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3251f8-9e38-4094-86f1-98187e5b2c75" containerName="registry"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.866991 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.867049 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.867153 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerName="extract-utilities"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.867221 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerName="extract-utilities"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.867319 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.867381 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.867462 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.867520 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerName="extract-content"
Jan 03 05:44:57 crc kubenswrapper[4854]: E0103 05:44:57.867702 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.867790 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.868461 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.868557 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="6127414f-e3e3-4c52-81a8-f6fea70b7d0c" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.868633 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6f79dd-0ea7-442e-ab1e-e35b15d45721" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.868694 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7f5d78-25a5-497a-9315-494fe26edb93" containerName="marketplace-operator"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.868754 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="332dcfb7-8bcf-46bf-9168-4bdb4411e55e" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.869180 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="54056ea8-c177-4995-8261-209eb3200f5f" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.869271 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="80855e9f-3a0c-439c-87cf-933b8825c398" containerName="registry-server"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.869350 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3251f8-9e38-4094-86f1-98187e5b2c75" containerName="registry"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.870310 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.872410 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.873902 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5dgzc"]
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.927028 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-utilities\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.927436 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-catalog-content\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:57 crc kubenswrapper[4854]: I0103 05:44:57.927581 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dsls\" (UniqueName: \"kubernetes.io/projected/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-kube-api-access-9dsls\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:58 crc kubenswrapper[4854]: I0103 05:44:58.028948 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-utilities\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:58 crc kubenswrapper[4854]: I0103 05:44:58.029201 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-catalog-content\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:58 crc kubenswrapper[4854]: I0103 05:44:58.029304 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dsls\" (UniqueName: \"kubernetes.io/projected/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-kube-api-access-9dsls\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:58 crc kubenswrapper[4854]: I0103 05:44:58.029534 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-catalog-content\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:58 crc kubenswrapper[4854]: I0103 05:44:58.029619 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-utilities\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:58 crc kubenswrapper[4854]: I0103 05:44:58.052562 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dsls\" (UniqueName: \"kubernetes.io/projected/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-kube-api-access-9dsls\") pod \"community-operators-5dgzc\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") " pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:58 crc kubenswrapper[4854]: I0103 05:44:58.199492 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 05:44:58 crc kubenswrapper[4854]: I0103 05:44:58.366577 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:58.667788 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5dgzc"]
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:58.995281 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9lzwf"]
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:58.996903 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.001973 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.009676 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lzwf"]
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.040145 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-catalog-content\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.040209 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-utilities\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.040262 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cpgp\" (UniqueName: \"kubernetes.io/projected/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-kube-api-access-2cpgp\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.142265 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-catalog-content\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.142347 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-utilities\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.142385 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpgp\" (UniqueName: \"kubernetes.io/projected/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-kube-api-access-2cpgp\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.143226 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-utilities\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.143229 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-catalog-content\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.164890 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpgp\" (UniqueName: \"kubernetes.io/projected/7e9a4f28-3133-4df6-9ed3-fbae3e03d777-kube-api-access-2cpgp\") pod \"redhat-operators-9lzwf\" (UID: \"7e9a4f28-3133-4df6-9ed3-fbae3e03d777\") " pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.313588 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.447202 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.532415 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.571133 4854 generic.go:334] "Generic (PLEG): container finished" podID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerID="ced53a9960cd13e78f3ac4bbe4ed04116667a8b8b4fcf8dd75e7590593936639" exitCode=0
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.571193 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dgzc" event={"ID":"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d","Type":"ContainerDied","Data":"ced53a9960cd13e78f3ac4bbe4ed04116667a8b8b4fcf8dd75e7590593936639"}
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.571231 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dgzc" event={"ID":"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d","Type":"ContainerStarted","Data":"5db31a81547c7efb2d754b7e82fb164cc210c6be342d754ee7f929049c05691d"}
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.770626 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lzwf"]
Jan 03 05:44:59 crc kubenswrapper[4854]: W0103 05:44:59.775947 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e9a4f28_3133_4df6_9ed3_fbae3e03d777.slice/crio-5ac4ea82f03eaa03cd5430bc670ff85d7c146c20d62b6126a11e0de468821de0 WatchSource:0}: Error finding container 5ac4ea82f03eaa03cd5430bc670ff85d7c146c20d62b6126a11e0de468821de0: Status 404 returned error can't find the container with id 5ac4ea82f03eaa03cd5430bc670ff85d7c146c20d62b6126a11e0de468821de0
Jan 03 05:44:59 crc kubenswrapper[4854]: I0103 05:44:59.913101 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.040693 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.172171 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.172221 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"]
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.173228 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.179766 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.179916 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.183381 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"]
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.221741 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.256247 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-config-volume\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.256305 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-secret-volume\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.256357 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6mh6\" (UniqueName: \"kubernetes.io/projected/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-kube-api-access-k6mh6\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.356977 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6mh6\" (UniqueName: \"kubernetes.io/projected/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-kube-api-access-k6mh6\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.357049 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-config-volume\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.357076 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-secret-volume\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.358812 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-config-volume\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.366949 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-secret-volume\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.374545 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6mh6\" (UniqueName: \"kubernetes.io/projected/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-kube-api-access-k6mh6\") pod \"collect-profiles-29456985-29drk\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.477436 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6pvh8"]
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.479651 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.481641 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.483509 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pvh8"]
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.498713 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.559292 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03a2de93-c858-46e8-ae42-a34d1d776b7c-catalog-content\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.559393 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03a2de93-c858-46e8-ae42-a34d1d776b7c-utilities\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.559484 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrn9\" (UniqueName: \"kubernetes.io/projected/03a2de93-c858-46e8-ae42-a34d1d776b7c-kube-api-access-qtrn9\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.595894 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dgzc" event={"ID":"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d","Type":"ContainerStarted","Data":"c4ea397908a7c07b111569c8e0104d4fe1722b8a9a6aabedf655948c947af0bc"}
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.600819 4854 generic.go:334] "Generic (PLEG): container finished" podID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerID="b5e769e3875830adf7a7cdb77a5062832de4dd452d282354f18cc0d1f97c5a80" exitCode=0
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.600863 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lzwf" event={"ID":"7e9a4f28-3133-4df6-9ed3-fbae3e03d777","Type":"ContainerDied","Data":"b5e769e3875830adf7a7cdb77a5062832de4dd452d282354f18cc0d1f97c5a80"}
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.600912 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lzwf" event={"ID":"7e9a4f28-3133-4df6-9ed3-fbae3e03d777","Type":"ContainerStarted","Data":"5ac4ea82f03eaa03cd5430bc670ff85d7c146c20d62b6126a11e0de468821de0"}
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.661562 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtrn9\" (UniqueName: \"kubernetes.io/projected/03a2de93-c858-46e8-ae42-a34d1d776b7c-kube-api-access-qtrn9\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.661639 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03a2de93-c858-46e8-ae42-a34d1d776b7c-catalog-content\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.661675 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03a2de93-c858-46e8-ae42-a34d1d776b7c-utilities\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.662474 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03a2de93-c858-46e8-ae42-a34d1d776b7c-utilities\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.662924 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03a2de93-c858-46e8-ae42-a34d1d776b7c-catalog-content\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.679951 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtrn9\" (UniqueName: \"kubernetes.io/projected/03a2de93-c858-46e8-ae42-a34d1d776b7c-kube-api-access-qtrn9\") pod \"certified-operators-6pvh8\" (UID: \"03a2de93-c858-46e8-ae42-a34d1d776b7c\") " pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.799830 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.829986 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.868209 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8f5jd"]
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.869323 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.871113 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.877501 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8f5jd"]
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.926677 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.936670 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"]
Jan 03 05:45:00 crc kubenswrapper[4854]: W0103 05:45:00.940937 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00ca6b9b_1bf3_4fa5_b787_449b8f8bbfec.slice/crio-0e182ac6064d906e8108fd38e70a77d225ddb86de4b34737424cde33c03d72bb WatchSource:0}: Error finding container 0e182ac6064d906e8108fd38e70a77d225ddb86de4b34737424cde33c03d72bb: Status 404 returned error can't find the container with id 0e182ac6064d906e8108fd38e70a77d225ddb86de4b34737424cde33c03d72bb
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.965786 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcxc\" (UniqueName: \"kubernetes.io/projected/a0902db0-b7a6-496e-955c-c6f6bb3429c6-kube-api-access-6mcxc\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.965836 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0902db0-b7a6-496e-955c-c6f6bb3429c6-utilities\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:00 crc kubenswrapper[4854]: I0103 05:45:00.965934 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0902db0-b7a6-496e-955c-c6f6bb3429c6-catalog-content\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.066992 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcxc\" (UniqueName: \"kubernetes.io/projected/a0902db0-b7a6-496e-955c-c6f6bb3429c6-kube-api-access-6mcxc\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.067054 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0902db0-b7a6-496e-955c-c6f6bb3429c6-utilities\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.067219 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0902db0-b7a6-496e-955c-c6f6bb3429c6-catalog-content\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.067832 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0902db0-b7a6-496e-955c-c6f6bb3429c6-utilities\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.067868 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0902db0-b7a6-496e-955c-c6f6bb3429c6-catalog-content\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.085775 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcxc\" (UniqueName: \"kubernetes.io/projected/a0902db0-b7a6-496e-955c-c6f6bb3429c6-kube-api-access-6mcxc\") pod \"redhat-marketplace-8f5jd\" (UID: \"a0902db0-b7a6-496e-955c-c6f6bb3429c6\") " pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.199577 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.211725 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.235326 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pvh8"]
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.605389 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8f5jd"]
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.608750 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lzwf" event={"ID":"7e9a4f28-3133-4df6-9ed3-fbae3e03d777","Type":"ContainerStarted","Data":"2ae720edda670cadcdf083e91159219761b6a8b270fd0b25903a4b9db967a0c2"}
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.610665 4854 generic.go:334] "Generic (PLEG): container finished" podID="00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec" containerID="5c0d1907442a34cea381a800b574e65c8acedc1c4956e5e0bc228ca9a747b07e" exitCode=0
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.610716 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk" event={"ID":"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec","Type":"ContainerDied","Data":"5c0d1907442a34cea381a800b574e65c8acedc1c4956e5e0bc228ca9a747b07e"}
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.610736 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk" event={"ID":"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec","Type":"ContainerStarted","Data":"0e182ac6064d906e8108fd38e70a77d225ddb86de4b34737424cde33c03d72bb"}
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.612334 4854 generic.go:334] "Generic (PLEG): container finished" podID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerID="757450f57046fea166e1b9fe5570ad5e280baf80a80ec16595bc33474ee3058c" exitCode=0
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.612393 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pvh8" event={"ID":"03a2de93-c858-46e8-ae42-a34d1d776b7c","Type":"ContainerDied","Data":"757450f57046fea166e1b9fe5570ad5e280baf80a80ec16595bc33474ee3058c"}
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.612412 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pvh8" event={"ID":"03a2de93-c858-46e8-ae42-a34d1d776b7c","Type":"ContainerStarted","Data":"c7bf2acba275242d4464bd011165a8144a3c4dece9c8c708553410658c482bc4"}
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.616844 4854 generic.go:334] "Generic (PLEG): container finished" podID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerID="c4ea397908a7c07b111569c8e0104d4fe1722b8a9a6aabedf655948c947af0bc" exitCode=0
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.616912 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dgzc" event={"ID":"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d","Type":"ContainerDied","Data":"c4ea397908a7c07b111569c8e0104d4fe1722b8a9a6aabedf655948c947af0bc"}
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.616954 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dgzc" event={"ID":"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d","Type":"ContainerStarted","Data":"96652880f3451c0b6b3acae039a18348100116918f04cc23b505025360e35b4d"}
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.682151 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5dgzc" podStartSLOduration=2.945109538 podStartE2EDuration="4.682135989s" podCreationTimestamp="2026-01-03 05:44:57 +0000 UTC" firstStartedPulling="2026-01-03 05:44:59.573395985 +0000 UTC m=+277.899972597" lastFinishedPulling="2026-01-03 05:45:01.310422476 +0000 UTC m=+279.636999048" observedRunningTime="2026-01-03 05:45:01.679561923 +0000 UTC m=+280.006138495" watchObservedRunningTime="2026-01-03 05:45:01.682135989 +0000 UTC m=+280.008712561"
Jan 03 05:45:01 crc kubenswrapper[4854]: I0103 05:45:01.856733 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.375622 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.623409 4854 generic.go:334] "Generic (PLEG): container finished" podID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerID="2ae720edda670cadcdf083e91159219761b6a8b270fd0b25903a4b9db967a0c2" exitCode=0
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.623459 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lzwf" event={"ID":"7e9a4f28-3133-4df6-9ed3-fbae3e03d777","Type":"ContainerDied","Data":"2ae720edda670cadcdf083e91159219761b6a8b270fd0b25903a4b9db967a0c2"}
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.624893 4854 generic.go:334] "Generic (PLEG): container finished" podID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerID="2a0177617e5424c980268b41d5a9ea02d6c90235f8c782415328a42948445bd3" exitCode=0
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.624956 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8f5jd" event={"ID":"a0902db0-b7a6-496e-955c-c6f6bb3429c6","Type":"ContainerDied","Data":"2a0177617e5424c980268b41d5a9ea02d6c90235f8c782415328a42948445bd3"}
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.624987 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8f5jd" event={"ID":"a0902db0-b7a6-496e-955c-c6f6bb3429c6","Type":"ContainerStarted","Data":"8af78c64ed567469b3e0d078cbeac2fb67f7e5fe8427c9d86e683979ce04bee6"}
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.917477 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.969306 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.988570 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-secret-volume\") pod \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") "
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.988626 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6mh6\" (UniqueName: \"kubernetes.io/projected/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-kube-api-access-k6mh6\") pod \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") "
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.988655 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-config-volume\") pod \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\" (UID: \"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec\") "
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.989975 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-config-volume" (OuterVolumeSpecName: "config-volume") pod "00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec" (UID: "00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.995817 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec" (UID: "00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 05:45:02 crc kubenswrapper[4854]: I0103 05:45:02.996318 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-kube-api-access-k6mh6" (OuterVolumeSpecName: "kube-api-access-k6mh6") pod "00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec" (UID: "00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec"). InnerVolumeSpecName "kube-api-access-k6mh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.090678 4854 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.090716 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6mh6\" (UniqueName: \"kubernetes.io/projected/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-kube-api-access-k6mh6\") on node \"crc\" DevicePath \"\""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.090729 4854 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec-config-volume\") on node \"crc\" DevicePath \"\""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.350475 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q"]
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.351154 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" podUID="a7ab4e78-6d39-46bb-bb00-cd098e009115" containerName="route-controller-manager" containerID="cri-o://ad936cc3a4ec9bd5774af87fc83e8145bcf85213f2a2d73afc36a8ce4549f79c" gracePeriod=30
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.356907 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.632312 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lzwf" event={"ID":"7e9a4f28-3133-4df6-9ed3-fbae3e03d777","Type":"ContainerStarted","Data":"a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1"}
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.634303 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk" event={"ID":"00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec","Type":"ContainerDied","Data":"0e182ac6064d906e8108fd38e70a77d225ddb86de4b34737424cde33c03d72bb"}
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.634338 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e182ac6064d906e8108fd38e70a77d225ddb86de4b34737424cde33c03d72bb"
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.634386 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.652261 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9lzwf" podStartSLOduration=3.20846024 podStartE2EDuration="5.65224096s" podCreationTimestamp="2026-01-03 05:44:58 +0000 UTC" firstStartedPulling="2026-01-03 05:45:00.602379985 +0000 UTC m=+278.928956557" lastFinishedPulling="2026-01-03 05:45:03.046160705 +0000 UTC m=+281.372737277" observedRunningTime="2026-01-03 05:45:03.651811908 +0000 UTC m=+281.978388500" watchObservedRunningTime="2026-01-03 05:45:03.65224096 +0000 UTC m=+281.978817542"
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.656798 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8f5jd" event={"ID":"a0902db0-b7a6-496e-955c-c6f6bb3429c6","Type":"ContainerStarted","Data":"ae9d48fc83f76435a9cbaba74d6f91288dc078847c3816bfaf85451cb31511a2"}
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.658942 4854 generic.go:334] "Generic (PLEG): container finished" podID="a7ab4e78-6d39-46bb-bb00-cd098e009115" containerID="ad936cc3a4ec9bd5774af87fc83e8145bcf85213f2a2d73afc36a8ce4549f79c" exitCode=0
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.659008 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" event={"ID":"a7ab4e78-6d39-46bb-bb00-cd098e009115","Type":"ContainerDied","Data":"ad936cc3a4ec9bd5774af87fc83e8145bcf85213f2a2d73afc36a8ce4549f79c"}
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.661107 4854 generic.go:334] "Generic (PLEG): container finished" podID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerID="9b853a571ee7b2178cb440347d4feac2ae628791ea32fa3eef251f90ac04596b" exitCode=0
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.661154 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pvh8" event={"ID":"03a2de93-c858-46e8-ae42-a34d1d776b7c","Type":"ContainerDied","Data":"9b853a571ee7b2178cb440347d4feac2ae628791ea32fa3eef251f90ac04596b"}
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.741654 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.785374 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q"
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.797459 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s2w4\" (UniqueName: \"kubernetes.io/projected/a7ab4e78-6d39-46bb-bb00-cd098e009115-kube-api-access-5s2w4\") pod \"a7ab4e78-6d39-46bb-bb00-cd098e009115\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") "
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.797531 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-config\") pod \"a7ab4e78-6d39-46bb-bb00-cd098e009115\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") "
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.797632 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-client-ca\") pod \"a7ab4e78-6d39-46bb-bb00-cd098e009115\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") "
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.797667 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ab4e78-6d39-46bb-bb00-cd098e009115-serving-cert\") pod \"a7ab4e78-6d39-46bb-bb00-cd098e009115\" (UID: \"a7ab4e78-6d39-46bb-bb00-cd098e009115\") "
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.798685 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-client-ca" (OuterVolumeSpecName: "client-ca") pod "a7ab4e78-6d39-46bb-bb00-cd098e009115" (UID: "a7ab4e78-6d39-46bb-bb00-cd098e009115"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.798761 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-config" (OuterVolumeSpecName: "config") pod "a7ab4e78-6d39-46bb-bb00-cd098e009115" (UID: "a7ab4e78-6d39-46bb-bb00-cd098e009115"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.805510 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ab4e78-6d39-46bb-bb00-cd098e009115-kube-api-access-5s2w4" (OuterVolumeSpecName: "kube-api-access-5s2w4") pod "a7ab4e78-6d39-46bb-bb00-cd098e009115" (UID: "a7ab4e78-6d39-46bb-bb00-cd098e009115"). InnerVolumeSpecName "kube-api-access-5s2w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.805522 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ab4e78-6d39-46bb-bb00-cd098e009115-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a7ab4e78-6d39-46bb-bb00-cd098e009115" (UID: "a7ab4e78-6d39-46bb-bb00-cd098e009115"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.898978 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-client-ca\") on node \"crc\" DevicePath \"\""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.899020 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ab4e78-6d39-46bb-bb00-cd098e009115-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.899030 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s2w4\" (UniqueName: \"kubernetes.io/projected/a7ab4e78-6d39-46bb-bb00-cd098e009115-kube-api-access-5s2w4\") on node \"crc\" DevicePath \"\""
Jan 03 05:45:03 crc kubenswrapper[4854]: I0103 05:45:03.899040 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ab4e78-6d39-46bb-bb00-cd098e009115-config\") on node \"crc\" DevicePath \"\""
Jan 03 05:45:04 crc kubenswrapper[4854]: I0103 05:45:04.667400 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q" event={"ID":"a7ab4e78-6d39-46bb-bb00-cd098e009115","Type":"ContainerDied","Data":"a0de1a67b8d3faff9426842cbcbfb04a466305cbfb5ff5b1b668e0c9f0ea4a0b"}
Jan 03 05:45:04 crc kubenswrapper[4854]: I0103 05:45:04.667735 4854 scope.go:117] "RemoveContainer" containerID="ad936cc3a4ec9bd5774af87fc83e8145bcf85213f2a2d73afc36a8ce4549f79c"
Jan 03 05:45:04 crc kubenswrapper[4854]: I0103 05:45:04.667440 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q"
Jan 03 05:45:04 crc kubenswrapper[4854]: I0103 05:45:04.672335 4854 generic.go:334] "Generic (PLEG): container finished" podID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerID="ae9d48fc83f76435a9cbaba74d6f91288dc078847c3816bfaf85451cb31511a2" exitCode=0
Jan 03 05:45:04 crc kubenswrapper[4854]: I0103 05:45:04.672514 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8f5jd" event={"ID":"a0902db0-b7a6-496e-955c-c6f6bb3429c6","Type":"ContainerDied","Data":"ae9d48fc83f76435a9cbaba74d6f91288dc078847c3816bfaf85451cb31511a2"}
Jan 03 05:45:04 crc kubenswrapper[4854]: I0103 05:45:04.723187 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q"]
Jan 03 05:45:04 crc kubenswrapper[4854]: I0103 05:45:04.727757 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-95z6q"]
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.435758 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s"]
Jan 03 05:45:05 crc kubenswrapper[4854]: E0103 05:45:05.436001 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ab4e78-6d39-46bb-bb00-cd098e009115" containerName="route-controller-manager"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.436015 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ab4e78-6d39-46bb-bb00-cd098e009115" containerName="route-controller-manager"
Jan 03 05:45:05 crc kubenswrapper[4854]: E0103 05:45:05.436032 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec" containerName="collect-profiles"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.436040 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec" containerName="collect-profiles"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.436170 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ab4e78-6d39-46bb-bb00-cd098e009115" containerName="route-controller-manager"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.436188 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec" containerName="collect-profiles"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.436650 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.441598 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.442510 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.442697 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.447238 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s"]
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.447471 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.447665 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.447775 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.527794 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-client-ca\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.527875 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m42c\" (UniqueName: \"kubernetes.io/projected/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-kube-api-access-8m42c\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s"
Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.527909 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-config\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID:
\"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.527947 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-serving-cert\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.628825 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-client-ca\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.629241 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m42c\" (UniqueName: \"kubernetes.io/projected/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-kube-api-access-8m42c\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.629418 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-config\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.629583 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-serving-cert\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.629787 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-client-ca\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.630821 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-config\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.635213 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-serving-cert\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " 
pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.645022 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m42c\" (UniqueName: \"kubernetes.io/projected/0f4370d7-e178-42cc-99ec-fdfeca5fb5f8-kube-api-access-8m42c\") pod \"route-controller-manager-799fd78b6c-wqs5s\" (UID: \"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8\") " pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.679846 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pvh8" event={"ID":"03a2de93-c858-46e8-ae42-a34d1d776b7c","Type":"ContainerStarted","Data":"bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca"} Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.693434 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.700953 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6pvh8" podStartSLOduration=2.826435183 podStartE2EDuration="5.700934841s" podCreationTimestamp="2026-01-03 05:45:00 +0000 UTC" firstStartedPulling="2026-01-03 05:45:01.613422613 +0000 UTC m=+279.939999205" lastFinishedPulling="2026-01-03 05:45:04.487922271 +0000 UTC m=+282.814498863" observedRunningTime="2026-01-03 05:45:05.697657036 +0000 UTC m=+284.024233608" watchObservedRunningTime="2026-01-03 05:45:05.700934841 +0000 UTC m=+284.027511423" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.774794 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:05 crc kubenswrapper[4854]: I0103 05:45:05.893750 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 03 05:45:06 crc kubenswrapper[4854]: I0103 05:45:06.124004 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ab4e78-6d39-46bb-bb00-cd098e009115" path="/var/lib/kubelet/pods/a7ab4e78-6d39-46bb-bb00-cd098e009115/volumes" Jan 03 05:45:06 crc kubenswrapper[4854]: I0103 05:45:06.207558 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s"] Jan 03 05:45:06 crc kubenswrapper[4854]: I0103 05:45:06.685938 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" event={"ID":"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8","Type":"ContainerStarted","Data":"a46e69ea1564a4df5f0ce9fac29eb7606702441f48e10f77e7fcabd1ed399622"} Jan 03 05:45:07 crc kubenswrapper[4854]: I0103 05:45:07.255380 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 03 05:45:07 crc kubenswrapper[4854]: I0103 05:45:07.438640 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 03 05:45:07 crc kubenswrapper[4854]: I0103 05:45:07.694294 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" 
event={"ID":"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8","Type":"ContainerStarted","Data":"8ba167167a7457c4d989953c93e58c0a961916861f9a13e0bb90cacb5956b991"} Jan 03 05:45:08 crc kubenswrapper[4854]: I0103 05:45:08.200826 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5dgzc" Jan 03 05:45:08 crc kubenswrapper[4854]: I0103 05:45:08.200876 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5dgzc" Jan 03 05:45:08 crc kubenswrapper[4854]: I0103 05:45:08.238719 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5dgzc" Jan 03 05:45:08 crc kubenswrapper[4854]: I0103 05:45:08.319104 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 03 05:45:08 crc kubenswrapper[4854]: I0103 05:45:08.699628 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:08 crc kubenswrapper[4854]: I0103 05:45:08.704452 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 05:45:08 crc kubenswrapper[4854]: I0103 05:45:08.716005 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podStartSLOduration=5.715991774 podStartE2EDuration="5.715991774s" podCreationTimestamp="2026-01-03 05:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:45:08.713940491 +0000 UTC m=+287.040517073" watchObservedRunningTime="2026-01-03 05:45:08.715991774 +0000 UTC m=+287.042568346" Jan 03 05:45:08 crc kubenswrapper[4854]: I0103 05:45:08.757927 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5dgzc" Jan 03 05:45:09 crc kubenswrapper[4854]: I0103 05:45:09.314545 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9lzwf" Jan 03 05:45:09 crc kubenswrapper[4854]: I0103 05:45:09.314921 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9lzwf" Jan 03 05:45:09 crc kubenswrapper[4854]: I0103 05:45:09.365520 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9lzwf" Jan 03 05:45:09 crc kubenswrapper[4854]: I0103 05:45:09.708924 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8f5jd" event={"ID":"a0902db0-b7a6-496e-955c-c6f6bb3429c6","Type":"ContainerStarted","Data":"beefc6c9c7cd45bbcbb076ad45c39976e498a1e4adab4caf810e7c2791e0ac1e"} Jan 03 05:45:09 crc kubenswrapper[4854]: I0103 05:45:09.730137 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8f5jd" podStartSLOduration=3.486994876 podStartE2EDuration="9.730123271s" podCreationTimestamp="2026-01-03 05:45:00 +0000 UTC" firstStartedPulling="2026-01-03 05:45:02.629437218 +0000 UTC m=+280.956013800" lastFinishedPulling="2026-01-03 05:45:08.872565623 +0000 UTC m=+287.199142195" observedRunningTime="2026-01-03 05:45:09.728827198 +0000 UTC 
m=+288.055403790" watchObservedRunningTime="2026-01-03 05:45:09.730123271 +0000 UTC m=+288.056699853" Jan 03 05:45:09 crc kubenswrapper[4854]: I0103 05:45:09.750250 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9lzwf" Jan 03 05:45:10 crc kubenswrapper[4854]: I0103 05:45:10.800129 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6pvh8" Jan 03 05:45:10 crc kubenswrapper[4854]: I0103 05:45:10.800206 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pvh8" Jan 03 05:45:10 crc kubenswrapper[4854]: I0103 05:45:10.874193 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pvh8" Jan 03 05:45:11 crc kubenswrapper[4854]: I0103 05:45:11.160559 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 03 05:45:11 crc kubenswrapper[4854]: I0103 05:45:11.199832 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8f5jd" Jan 03 05:45:11 crc kubenswrapper[4854]: I0103 05:45:11.199881 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8f5jd" Jan 03 05:45:11 crc kubenswrapper[4854]: I0103 05:45:11.248377 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8f5jd" Jan 03 05:45:11 crc kubenswrapper[4854]: I0103 05:45:11.594835 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 03 05:45:11 crc kubenswrapper[4854]: I0103 05:45:11.784147 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pvh8" Jan 03 05:45:12 crc kubenswrapper[4854]: I0103 05:45:12.630435 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.192013 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.330883 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd"] Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.332009 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.337311 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.338426 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.339349 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.339381 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.339367 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.347508 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd"] Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.426195 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/971e494a-25cb-40e3-971e-edaaabe80e6f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.426350 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/971e494a-25cb-40e3-971e-edaaabe80e6f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.426392 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwqzh\" (UniqueName: \"kubernetes.io/projected/971e494a-25cb-40e3-971e-edaaabe80e6f-kube-api-access-lwqzh\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.527834 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/971e494a-25cb-40e3-971e-edaaabe80e6f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.527976 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/971e494a-25cb-40e3-971e-edaaabe80e6f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 
05:45:13.527999 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwqzh\" (UniqueName: \"kubernetes.io/projected/971e494a-25cb-40e3-971e-edaaabe80e6f-kube-api-access-lwqzh\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.528950 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/971e494a-25cb-40e3-971e-edaaabe80e6f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.539358 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/971e494a-25cb-40e3-971e-edaaabe80e6f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.548234 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwqzh\" (UniqueName: \"kubernetes.io/projected/971e494a-25cb-40e3-971e-edaaabe80e6f-kube-api-access-lwqzh\") pod \"cluster-monitoring-operator-6d5b84845-9g6wd\" (UID: \"971e494a-25cb-40e3-971e-edaaabe80e6f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:13 crc kubenswrapper[4854]: I0103 05:45:13.648252 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" Jan 03 05:45:14 crc kubenswrapper[4854]: I0103 05:45:14.070812 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd"] Jan 03 05:45:14 crc kubenswrapper[4854]: W0103 05:45:14.079002 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971e494a_25cb_40e3_971e_edaaabe80e6f.slice/crio-34d182f82a8a37d123db3c038cf66a89974c50b22485c8e1c4c2308db2882e73 WatchSource:0}: Error finding container 34d182f82a8a37d123db3c038cf66a89974c50b22485c8e1c4c2308db2882e73: Status 404 returned error can't find the container with id 34d182f82a8a37d123db3c038cf66a89974c50b22485c8e1c4c2308db2882e73 Jan 03 05:45:14 crc kubenswrapper[4854]: I0103 05:45:14.270810 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 03 05:45:14 crc kubenswrapper[4854]: I0103 05:45:14.753784 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" event={"ID":"971e494a-25cb-40e3-971e-edaaabe80e6f","Type":"ContainerStarted","Data":"34d182f82a8a37d123db3c038cf66a89974c50b22485c8e1c4c2308db2882e73"} Jan 03 05:45:21 crc kubenswrapper[4854]: I0103 05:45:21.263260 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8f5jd" Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.346419 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-c4tlf"] Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.347271 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" podUID="20f336e4-1795-44b6-bde4-e614b5bee120" containerName="controller-manager" containerID="cri-o://0578ac453395e979a1316b50402ba660b646103a4ccce294c2f4164820cea48e" gracePeriod=30 Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.386867 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f"] Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.387512 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.389632 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-fw4tn" Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.390177 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.401172 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f"] Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.427776 4854 patch_prober.go:28] interesting pod/controller-manager-68ff5b985d-c4tlf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.427825 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" podUID="20f336e4-1795-44b6-bde4-e614b5bee120" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.476170 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e1f91a20-c61d-488f-98ab-f966174f3764-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-q6v5f\" (UID: \"e1f91a20-c61d-488f-98ab-f966174f3764\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.577563 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e1f91a20-c61d-488f-98ab-f966174f3764-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-q6v5f\" (UID: \"e1f91a20-c61d-488f-98ab-f966174f3764\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 05:45:23 crc kubenswrapper[4854]: E0103 05:45:23.577743 4854 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Jan 03 05:45:23 crc kubenswrapper[4854]: E0103 05:45:23.577850 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1f91a20-c61d-488f-98ab-f966174f3764-tls-certificates podName:e1f91a20-c61d-488f-98ab-f966174f3764 nodeName:}" failed. No retries permitted until 2026-01-03 05:45:24.07781964 +0000 UTC m=+302.404396242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/e1f91a20-c61d-488f-98ab-f966174f3764-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-q6v5f" (UID: "e1f91a20-c61d-488f-98ab-f966174f3764") : secret "prometheus-operator-admission-webhook-tls" not found Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.814268 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" event={"ID":"971e494a-25cb-40e3-971e-edaaabe80e6f","Type":"ContainerStarted","Data":"8862c606b5467c4d6af7a28192cefbafb6d4a679d538b7e36486d04cf8ec5f4f"} Jan 03 05:45:23 crc kubenswrapper[4854]: I0103 05:45:23.829938 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9g6wd" podStartSLOduration=2.129245809 podStartE2EDuration="10.829913469s" podCreationTimestamp="2026-01-03 05:45:13 +0000 UTC" firstStartedPulling="2026-01-03 05:45:14.081992116 +0000 UTC m=+292.408568728" lastFinishedPulling="2026-01-03 05:45:22.782659816 +0000 UTC m=+301.109236388" observedRunningTime="2026-01-03 05:45:23.829143089 +0000 UTC m=+302.155719701" watchObservedRunningTime="2026-01-03 05:45:23.829913469 +0000 UTC m=+302.156490081" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.084492 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e1f91a20-c61d-488f-98ab-f966174f3764-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-q6v5f\" (UID: \"e1f91a20-c61d-488f-98ab-f966174f3764\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.093740 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e1f91a20-c61d-488f-98ab-f966174f3764-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-q6v5f\" (UID: \"e1f91a20-c61d-488f-98ab-f966174f3764\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.300596 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.555250 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f"] Jan 03 05:45:24 crc kubenswrapper[4854]: W0103 05:45:24.563497 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1f91a20_c61d_488f_98ab_f966174f3764.slice/crio-ec5348af85c575748e0102afb9393356b7889d4ca761ea65ba6c9bc37587fd9c WatchSource:0}: Error finding container ec5348af85c575748e0102afb9393356b7889d4ca761ea65ba6c9bc37587fd9c: Status 404 returned error can't find the container with id ec5348af85c575748e0102afb9393356b7889d4ca761ea65ba6c9bc37587fd9c Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.820532 4854 generic.go:334] "Generic (PLEG): container finished" podID="20f336e4-1795-44b6-bde4-e614b5bee120" containerID="0578ac453395e979a1316b50402ba660b646103a4ccce294c2f4164820cea48e" exitCode=0 Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.820640 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" event={"ID":"20f336e4-1795-44b6-bde4-e614b5bee120","Type":"ContainerDied","Data":"0578ac453395e979a1316b50402ba660b646103a4ccce294c2f4164820cea48e"} Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.820860 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" event={"ID":"20f336e4-1795-44b6-bde4-e614b5bee120","Type":"ContainerDied","Data":"cb6d3001da420da3cab59d26267b7798517060cc8de9272617cb516a3947043e"} Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.820880 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb6d3001da420da3cab59d26267b7798517060cc8de9272617cb516a3947043e" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.821831 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" event={"ID":"e1f91a20-c61d-488f-98ab-f966174f3764","Type":"ContainerStarted","Data":"ec5348af85c575748e0102afb9393356b7889d4ca761ea65ba6c9bc37587fd9c"} Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.835422 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.869645 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z"] Jan 03 05:45:24 crc kubenswrapper[4854]: E0103 05:45:24.870178 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f336e4-1795-44b6-bde4-e614b5bee120" containerName="controller-manager" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.870204 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f336e4-1795-44b6-bde4-e614b5bee120" containerName="controller-manager" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.870438 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f336e4-1795-44b6-bde4-e614b5bee120" containerName="controller-manager" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.871324 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.879021 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z"] Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.999763 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-config\") pod \"20f336e4-1795-44b6-bde4-e614b5bee120\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " Jan 03 05:45:24 crc kubenswrapper[4854]: I0103 05:45:24.999804 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f336e4-1795-44b6-bde4-e614b5bee120-serving-cert\") pod \"20f336e4-1795-44b6-bde4-e614b5bee120\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:24.999819 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-proxy-ca-bundles\") pod \"20f336e4-1795-44b6-bde4-e614b5bee120\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:24.999837 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-client-ca\") pod \"20f336e4-1795-44b6-bde4-e614b5bee120\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:24.999876 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l5t6\" (UniqueName: \"kubernetes.io/projected/20f336e4-1795-44b6-bde4-e614b5bee120-kube-api-access-7l5t6\") pod \"20f336e4-1795-44b6-bde4-e614b5bee120\" (UID: \"20f336e4-1795-44b6-bde4-e614b5bee120\") " Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.000171 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwmrn\" (UniqueName: \"kubernetes.io/projected/82dcb747-3603-42a5-82ca-f7664d5d9027-kube-api-access-dwmrn\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.000201 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82dcb747-3603-42a5-82ca-f7664d5d9027-serving-cert\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.000276 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-client-ca\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.000337 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-proxy-ca-bundles\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.000366 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-config\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.001202 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-client-ca" (OuterVolumeSpecName: "client-ca") pod "20f336e4-1795-44b6-bde4-e614b5bee120" (UID: "20f336e4-1795-44b6-bde4-e614b5bee120"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.001394 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "20f336e4-1795-44b6-bde4-e614b5bee120" (UID: "20f336e4-1795-44b6-bde4-e614b5bee120"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.001706 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-config" (OuterVolumeSpecName: "config") pod "20f336e4-1795-44b6-bde4-e614b5bee120" (UID: "20f336e4-1795-44b6-bde4-e614b5bee120"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.006070 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f336e4-1795-44b6-bde4-e614b5bee120-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20f336e4-1795-44b6-bde4-e614b5bee120" (UID: "20f336e4-1795-44b6-bde4-e614b5bee120"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.007264 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f336e4-1795-44b6-bde4-e614b5bee120-kube-api-access-7l5t6" (OuterVolumeSpecName: "kube-api-access-7l5t6") pod "20f336e4-1795-44b6-bde4-e614b5bee120" (UID: "20f336e4-1795-44b6-bde4-e614b5bee120"). InnerVolumeSpecName "kube-api-access-7l5t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.101803 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-client-ca\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.101917 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-proxy-ca-bundles\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.101962 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-config\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.102043 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwmrn\" (UniqueName: \"kubernetes.io/projected/82dcb747-3603-42a5-82ca-f7664d5d9027-kube-api-access-dwmrn\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.102139 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82dcb747-3603-42a5-82ca-f7664d5d9027-serving-cert\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.102256 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l5t6\" (UniqueName: \"kubernetes.io/projected/20f336e4-1795-44b6-bde4-e614b5bee120-kube-api-access-7l5t6\") on node \"crc\" DevicePath \"\"" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.102279 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.102298 4854 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f336e4-1795-44b6-bde4-e614b5bee120-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.102315 4854 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.102331 4854 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f336e4-1795-44b6-bde4-e614b5bee120-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.102923 4854 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-client-ca\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.103759 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-proxy-ca-bundles\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.104360 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dcb747-3603-42a5-82ca-f7664d5d9027-config\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.107596 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82dcb747-3603-42a5-82ca-f7664d5d9027-serving-cert\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.121241 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwmrn\" (UniqueName: \"kubernetes.io/projected/82dcb747-3603-42a5-82ca-f7664d5d9027-kube-api-access-dwmrn\") pod \"controller-manager-7ff6f7c9f7-lfv4z\" (UID: \"82dcb747-3603-42a5-82ca-f7664d5d9027\") " pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.192749 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.603158 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z"] Jan 03 05:45:25 crc kubenswrapper[4854]: W0103 05:45:25.610099 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82dcb747_3603_42a5_82ca_f7664d5d9027.slice/crio-43341a8f650f0092d5dc178108656af60f6e9e0ad871eb7693dadda59e8f1dde WatchSource:0}: Error finding container 43341a8f650f0092d5dc178108656af60f6e9e0ad871eb7693dadda59e8f1dde: Status 404 returned error can't find the container with id 43341a8f650f0092d5dc178108656af60f6e9e0ad871eb7693dadda59e8f1dde Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.827878 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff5b985d-c4tlf" Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.829479 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" event={"ID":"82dcb747-3603-42a5-82ca-f7664d5d9027","Type":"ContainerStarted","Data":"51cf1354e8866c019109dd0689ead62267930f50c8e279fc80a89946e66485df"} Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.829535 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" event={"ID":"82dcb747-3603-42a5-82ca-f7664d5d9027","Type":"ContainerStarted","Data":"43341a8f650f0092d5dc178108656af60f6e9e0ad871eb7693dadda59e8f1dde"} Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.865850 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-c4tlf"] Jan 03 05:45:25 crc kubenswrapper[4854]: I0103 05:45:25.870244 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-c4tlf"] Jan 03 05:45:26 crc kubenswrapper[4854]: I0103 05:45:26.124895 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f336e4-1795-44b6-bde4-e614b5bee120" path="/var/lib/kubelet/pods/20f336e4-1795-44b6-bde4-e614b5bee120/volumes" Jan 03 05:45:26 crc kubenswrapper[4854]: I0103 05:45:26.833632 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:26 crc kubenswrapper[4854]: I0103 05:45:26.842095 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 05:45:26 crc kubenswrapper[4854]: I0103 05:45:26.863282 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podStartSLOduration=3.8632598849999997 podStartE2EDuration="3.863259885s" podCreationTimestamp="2026-01-03 05:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:45:26.855286689 +0000 UTC m=+305.181863351" watchObservedRunningTime="2026-01-03 05:45:26.863259885 +0000 UTC m=+305.189836477" Jan 03 05:45:27 crc kubenswrapper[4854]: I0103 05:45:27.844652 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" event={"ID":"e1f91a20-c61d-488f-98ab-f966174f3764","Type":"ContainerStarted","Data":"cf53552d47c3573d6b6c388776a4079b91daadea1b72bad72d69acf59404441c"} Jan 03 05:45:27 crc kubenswrapper[4854]: I0103 05:45:27.889887 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podStartSLOduration=2.378479108 podStartE2EDuration="4.889861195s" podCreationTimestamp="2026-01-03 05:45:23 +0000 UTC" firstStartedPulling="2026-01-03 05:45:24.567388622 +0000 UTC m=+302.893965204" lastFinishedPulling="2026-01-03 05:45:27.078770709 +0000 UTC m=+305.405347291" observedRunningTime="2026-01-03 05:45:27.888838128 +0000 UTC m=+306.215414710" watchObservedRunningTime="2026-01-03 05:45:27.889861195 +0000 UTC m=+306.216437797" Jan 03 05:45:28 crc kubenswrapper[4854]: I0103 05:45:28.850818 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 05:45:28 crc kubenswrapper[4854]: I0103 05:45:28.855775 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.487728 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-twpx4"] Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.488849 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.492045 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.492237 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-gs7fv" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.492563 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.492980 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.504888 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-twpx4"] Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.661691 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/166f2903-bd97-4aa9-b66b-14826bafdc8d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.661750 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/166f2903-bd97-4aa9-b66b-14826bafdc8d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.661796 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vmp\" (UniqueName: \"kubernetes.io/projected/166f2903-bd97-4aa9-b66b-14826bafdc8d-kube-api-access-h5vmp\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.661857 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/166f2903-bd97-4aa9-b66b-14826bafdc8d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.763144 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/166f2903-bd97-4aa9-b66b-14826bafdc8d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.763189 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vmp\" (UniqueName: \"kubernetes.io/projected/166f2903-bd97-4aa9-b66b-14826bafdc8d-kube-api-access-h5vmp\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.763228 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/166f2903-bd97-4aa9-b66b-14826bafdc8d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.763281 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/166f2903-bd97-4aa9-b66b-14826bafdc8d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.763992 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/166f2903-bd97-4aa9-b66b-14826bafdc8d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.772872 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/166f2903-bd97-4aa9-b66b-14826bafdc8d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.774132 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/166f2903-bd97-4aa9-b66b-14826bafdc8d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.791966 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5vmp\" (UniqueName: \"kubernetes.io/projected/166f2903-bd97-4aa9-b66b-14826bafdc8d-kube-api-access-h5vmp\") pod \"prometheus-operator-db54df47d-twpx4\" (UID: \"166f2903-bd97-4aa9-b66b-14826bafdc8d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:29 crc kubenswrapper[4854]: I0103 05:45:29.809128 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" Jan 03 05:45:30 crc kubenswrapper[4854]: I0103 05:45:30.267768 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-twpx4"] Jan 03 05:45:30 crc kubenswrapper[4854]: I0103 05:45:30.865710 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" event={"ID":"166f2903-bd97-4aa9-b66b-14826bafdc8d","Type":"ContainerStarted","Data":"f9ab03a4659cc7ccb99d8a0a91231776a2c5723e5b47054b5f39c3594a477711"} Jan 03 05:45:34 crc kubenswrapper[4854]: I0103 05:45:34.898685 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" event={"ID":"166f2903-bd97-4aa9-b66b-14826bafdc8d","Type":"ContainerStarted","Data":"cbbe957a1f0157c1b8166a1c94efdf7bfdaf593ab88844692dd165b25f2343b3"} Jan 03 05:45:34 crc kubenswrapper[4854]: I0103 05:45:34.899473 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" event={"ID":"166f2903-bd97-4aa9-b66b-14826bafdc8d","Type":"ContainerStarted","Data":"1969fbc017bdd73055205ce558598fd3743ba0309f71fcc57e5750e411c798d9"} Jan 03 05:45:34 crc kubenswrapper[4854]: I0103 05:45:34.937038 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-twpx4" podStartSLOduration=2.364722169 podStartE2EDuration="5.936964222s" podCreationTimestamp="2026-01-03 05:45:29 +0000 UTC" firstStartedPulling="2026-01-03 05:45:30.281239229 +0000 UTC m=+308.607815801" lastFinishedPulling="2026-01-03 05:45:33.853481272 +0000 UTC m=+312.180057854" observedRunningTime="2026-01-03 05:45:34.92800735 +0000 UTC m=+313.254583982" watchObservedRunningTime="2026-01-03 05:45:34.936964222 +0000 UTC m=+313.263540874" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.855527 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc"] Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.856484 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.858204 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-htmcx" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.858385 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.858481 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.859438 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.867778 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-s74p5"]
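The bursts of "Caches populated for *v1.Secret from object-..." lines are kubelet's watch-based secret/configmap manager at work: rather than listing every object in openshift-monitoring, it starts a dedicated single-object reflector for each secret or configmap that a pod scheduled to this node actually references. A client-go sketch of that pattern (illustrative, not kubelet's actual wiring; the function name and signature are ours):

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchSingleSecret mirrors the per-object pattern behind the "Caches
// populated" entries: the list/watch is narrowed to one named Secret,
// so the node only caches and watches objects its pods actually mount.
func watchSingleSecret(cs kubernetes.Interface, namespace, name string, stop <-chan struct{}) cache.Store {
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), "secrets", namespace,
		fields.OneTermEqualSelector("metadata.name", name))
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	go cache.NewReflector(lw, &v1.Secret{}, store, 0).Run(stop)
	return store
}
```

The upside, visible here, is that the kubelet's memory and API-server watch load stay proportional to the pods it runs, not to the size of the namespace.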
Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.869002 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.870454 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.870659 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-xmxjs" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.870958 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.874960 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc"] Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.885662 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-mf6g9"] Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.886815 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.891216 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.891394 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-tjsxb" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.891585 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.891757 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-s74p5"] Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894132 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894169 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rghl9\" (UniqueName: \"kubernetes.io/projected/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-api-access-rghl9\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894188 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8da9ff59-acf5-4881-9d4c-3e0292c88de8-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894204 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-root\") pod
\"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894219 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c7a8c7d-d011-4565-b546-fab08bf723f9-metrics-client-ca\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894242 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-tls\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894256 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8da9ff59-acf5-4881-9d4c-3e0292c88de8-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894275 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894291 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsk2s\" (UniqueName: \"kubernetes.io/projected/b63db84d-d2f0-418e-978a-fbe97b3effbe-kube-api-access-tsk2s\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894312 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m959q\" (UniqueName: \"kubernetes.io/projected/3c7a8c7d-d011-4565-b546-fab08bf723f9-kube-api-access-m959q\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894333 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894350 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-wtmp\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " 
pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894382 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894399 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894413 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-sys\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894433 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894454 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-textfile\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.894473 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b63db84d-d2f0-418e-978a-fbe97b3effbe-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.995848 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rghl9\" (UniqueName: \"kubernetes.io/projected/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-api-access-rghl9\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996115 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8da9ff59-acf5-4881-9d4c-3e0292c88de8-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996199 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-root\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996306 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c7a8c7d-d011-4565-b546-fab08bf723f9-metrics-client-ca\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996387 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-tls\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996350 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-root\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996463 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8da9ff59-acf5-4881-9d4c-3e0292c88de8-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: E0103 05:45:36.996515 4854 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996560 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996603 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsk2s\" (UniqueName: \"kubernetes.io/projected/b63db84d-d2f0-418e-978a-fbe97b3effbe-kube-api-access-tsk2s\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996696 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m959q\" (UniqueName: \"kubernetes.io/projected/3c7a8c7d-d011-4565-b546-fab08bf723f9-kube-api-access-m959q\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: E0103 05:45:36.996779 4854 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-tls podName:3c7a8c7d-d011-4565-b546-fab08bf723f9 nodeName:}" failed. No retries permitted until 2026-01-03 05:45:37.49674444 +0000 UTC m=+315.823321102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-tls") pod "node-exporter-mf6g9" (UID: "3c7a8c7d-d011-4565-b546-fab08bf723f9") : secret "node-exporter-tls" not found Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996836 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.996909 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-wtmp\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997100 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8da9ff59-acf5-4881-9d4c-3e0292c88de8-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997168 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-wtmp\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997323 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997348 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c7a8c7d-d011-4565-b546-fab08bf723f9-metrics-client-ca\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997358 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997406 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-sys\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997552 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3c7a8c7d-d011-4565-b546-fab08bf723f9-sys\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997913 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8da9ff59-acf5-4881-9d4c-3e0292c88de8-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.997957 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.998115 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-textfile\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.998212 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b63db84d-d2f0-418e-978a-fbe97b3effbe-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.998292 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:36 crc kubenswrapper[4854]: E0103 05:45:36.998427 4854 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Jan 03 05:45:36 crc kubenswrapper[4854]: E0103 05:45:36.998558 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-tls podName:b63db84d-d2f0-418e-978a-fbe97b3effbe nodeName:}" failed. No retries permitted until 2026-01-03 05:45:37.498544877 +0000 UTC m=+315.825121449 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-s74p5" (UID: "b63db84d-d2f0-418e-978a-fbe97b3effbe") : secret "openshift-state-metrics-tls" not found Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.998715 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.998912 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-textfile\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:36 crc kubenswrapper[4854]: I0103 05:45:36.999479 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b63db84d-d2f0-418e-978a-fbe97b3effbe-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.005902 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.006108 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.006404 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.016983 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsk2s\" (UniqueName: \"kubernetes.io/projected/b63db84d-d2f0-418e-978a-fbe97b3effbe-kube-api-access-tsk2s\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5"
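The two MountVolume.SetUp failures above are routine ordering noise, not real faults: node-exporter-tls and openshift-state-metrics-tls simply had not been created yet when the mounts were first attempted, so nestedpendingoperations parks each operation ("No retries permitted until ...", durationBeforeRetry 500ms) and the reconciler tries again on a later pass; at 05:45:37.505 below both secrets exist and the mounts succeed. A hedged sketch of that retry shape (the 500ms initial delay matches the log; the doubling and the cap are illustrative, not kubelet's exact tuning):

```go
package example

import "time"

// retryMount retries a mount whose dependency (e.g. a Secret) may not
// exist yet, sleeping between attempts with a doubling delay.
func retryMount(setUp func() error, attempts int) error {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute
	var err error
	for i := 0; i < attempts; i++ {
		if err = setUp(); err == nil {
			return nil // e.g. secret "node-exporter-tls" finally found
		}
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}
```

The practical takeaway when reading such logs: a "secret ... not found" mount error only matters if it keeps repeating past the point where the controller that mints the secret should have run.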
(UniqueName: \"kubernetes.io/projected/8da9ff59-acf5-4881-9d4c-3e0292c88de8-kube-api-access-rghl9\") pod \"kube-state-metrics-777cb5bd5d-k9cwc\" (UID: \"8da9ff59-acf5-4881-9d4c-3e0292c88de8\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.023777 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.026170 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m959q\" (UniqueName: \"kubernetes.io/projected/3c7a8c7d-d011-4565-b546-fab08bf723f9-kube-api-access-m959q\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.180674 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.505589 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-tls\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.505999 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.509691 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3c7a8c7d-d011-4565-b546-fab08bf723f9-node-exporter-tls\") pod \"node-exporter-mf6g9\" (UID: \"3c7a8c7d-d011-4565-b546-fab08bf723f9\") " pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.510470 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b63db84d-d2f0-418e-978a-fbe97b3effbe-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-s74p5\" (UID: \"b63db84d-d2f0-418e-978a-fbe97b3effbe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.573598 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc"] Jan 03 05:45:37 crc kubenswrapper[4854]: W0103 05:45:37.582041 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da9ff59_acf5_4881_9d4c_3e0292c88de8.slice/crio-ebdd2bdd42de967933668e4119479c9c8a53539c8fdf977a47ce5a73ae91d14d WatchSource:0}: Error finding container ebdd2bdd42de967933668e4119479c9c8a53539c8fdf977a47ce5a73ae91d14d: Status 404 returned error can't find the 
container with id ebdd2bdd42de967933668e4119479c9c8a53539c8fdf977a47ce5a73ae91d14d Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.791878 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.806424 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-mf6g9" Jan 03 05:45:37 crc kubenswrapper[4854]: W0103 05:45:37.839635 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c7a8c7d_d011_4565_b546_fab08bf723f9.slice/crio-7737a8b998ffb478f161725906726b462d7b5020b33099b9ad549de12d6f27b8 WatchSource:0}: Error finding container 7737a8b998ffb478f161725906726b462d7b5020b33099b9ad549de12d6f27b8: Status 404 returned error can't find the container with id 7737a8b998ffb478f161725906726b462d7b5020b33099b9ad549de12d6f27b8 Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.917831 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mf6g9" event={"ID":"3c7a8c7d-d011-4565-b546-fab08bf723f9","Type":"ContainerStarted","Data":"7737a8b998ffb478f161725906726b462d7b5020b33099b9ad549de12d6f27b8"} Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.918700 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" event={"ID":"8da9ff59-acf5-4881-9d4c-3e0292c88de8","Type":"ContainerStarted","Data":"ebdd2bdd42de967933668e4119479c9c8a53539c8fdf977a47ce5a73ae91d14d"} Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.945303 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
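The two W-level "Failed to process watch event ... Status 404" messages above are a benign startup race: cAdvisor sees the new crio-* cgroup directory appear and asks about the container before CRI-O has finished registering it, and the very next PLEG events show both containers (7737a8b9..., ebdd2bdd...) started normally. A sketch of the tolerant handling such a cgroup watcher wants (all helpers here are hypothetical, not cAdvisor's API):

```go
package example

import (
	"errors"
	"log"
)

// errNotFound stands in for the runtime's 404 "can't find the container".
var errNotFound = errors.New("container not found")

// onCgroupEvent is a hypothetical handler for a container-cgroup watch
// event. During startup the cgroup can appear before the runtime has
// registered the container, so a not-found is logged as a warning and
// dropped; the container is picked up by a later event or housekeeping.
func onCgroupEvent(id string, inspect func(string) error) error {
	err := inspect(id)
	switch {
	case err == nil:
		return nil
	case errors.Is(err, errNotFound):
		log.Printf("W: failed to process watch event for %s: %v (transient)", id, err)
		return nil
	default:
		return err
	}
}
```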
Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.946922 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.950273 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.950418 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.950518 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.950645 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.952955 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.953567 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.953991 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.954130 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-jgk88" Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.957598 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 03 05:45:37 crc kubenswrapper[4854]: I0103 05:45:37.958627 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015243 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015293 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015329 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015350 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID:
\"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015440 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8bn\" (UniqueName: \"kubernetes.io/projected/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-kube-api-access-7n8bn\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015537 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-web-config\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015594 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-config-out\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015617 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015634 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015665 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015704 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.015768 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-config-volume\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117039 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117141 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-config-volume\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117176 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117201 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117235 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117267 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117304 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8bn\" (UniqueName: \"kubernetes.io/projected/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-kube-api-access-7n8bn\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117328 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-web-config\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117726 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-config-out\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117752 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117848 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.117882 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.118618 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.119211 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.119600 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.122494 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-config-out\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.122871 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.122988 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-web-config\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.123309 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-config-volume\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.123391 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.131586 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.139370 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.141658 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.152856 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8bn\" (UniqueName: \"kubernetes.io/projected/d7fd4c04-a71d-4912-ae27-3e3ee6c03edb-kube-api-access-7n8bn\") pod \"alertmanager-main-0\" (UID: \"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.235223 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-s74p5"] Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.284897 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.729781 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.924852 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" event={"ID":"b63db84d-d2f0-418e-978a-fbe97b3effbe","Type":"ContainerStarted","Data":"d6a839321dbc00e2983ab876b2243ae5d6746ff00abcca3b22f9038173b105d4"} Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.925948 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb","Type":"ContainerStarted","Data":"a44bd9d94ef992591877e5962ddce48b3634778f404bb734a693313f4e4dc965"} Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.948626 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5b7f7948f-gfss8"] Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.950247 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.952273 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.952490 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.952670 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-66qmc" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.955193 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.955283 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.956002 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.957157 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7t49bl2j6tj2f" Jan 03 05:45:38 crc kubenswrapper[4854]: I0103 05:45:38.971025 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5b7f7948f-gfss8"] Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.032369 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-tls\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.032442 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ee90900-26e8-4d06-b2b4-f646a1570746-metrics-client-ca\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: 
\"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.032481 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.032511 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-grpc-tls\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.032542 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.032567 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.032619 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4bmp\" (UniqueName: \"kubernetes.io/projected/0ee90900-26e8-4d06-b2b4-f646a1570746-kube-api-access-m4bmp\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.032645 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.133597 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.133664 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-grpc-tls\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.133708 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.133744 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.133821 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4bmp\" (UniqueName: \"kubernetes.io/projected/0ee90900-26e8-4d06-b2b4-f646a1570746-kube-api-access-m4bmp\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.133866 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.133928 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-tls\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.133989 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ee90900-26e8-4d06-b2b4-f646a1570746-metrics-client-ca\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.135800 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ee90900-26e8-4d06-b2b4-f646a1570746-metrics-client-ca\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.140011 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-web\") pod 
\"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.140806 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.140913 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-tls\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.141552 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.141789 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.142927 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0ee90900-26e8-4d06-b2b4-f646a1570746-secret-grpc-tls\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.165538 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4bmp\" (UniqueName: \"kubernetes.io/projected/0ee90900-26e8-4d06-b2b4-f646a1570746-kube-api-access-m4bmp\") pod \"thanos-querier-5b7f7948f-gfss8\" (UID: \"0ee90900-26e8-4d06-b2b4-f646a1570746\") " pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:39 crc kubenswrapper[4854]: I0103 05:45:39.382594 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:39.890533 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5b7f7948f-gfss8"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:39.935071 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" event={"ID":"b63db84d-d2f0-418e-978a-fbe97b3effbe","Type":"ContainerStarted","Data":"5f52799e2789f692b6c975a6d03c43283a1aa2fc7614cdf1536b918c2b00c48f"} Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:39.936289 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" event={"ID":"0ee90900-26e8-4d06-b2b4-f646a1570746","Type":"ContainerStarted","Data":"1e2ad82c434ccc46c8e15ee0f25dc45469f3dccc246f2a23e7eebf5f181a1e97"} Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.183767 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-665fcf668f-65wrt"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.185305 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.187462 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.188014 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-bzsjh" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.191625 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.191866 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.192013 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.192382 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3nsihhgn91170" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.273354 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-665fcf668f-65wrt"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.379667 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5899ebcd-eec0-44ae-9e07-98b443d209c1-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.380219 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5899ebcd-eec0-44ae-9e07-98b443d209c1-metrics-server-audit-profiles\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.380279 
4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-secret-metrics-client-certs\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.380365 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-client-ca-bundle\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.380427 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sppjw\" (UniqueName: \"kubernetes.io/projected/5899ebcd-eec0-44ae-9e07-98b443d209c1-kube-api-access-sppjw\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.380502 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5899ebcd-eec0-44ae-9e07-98b443d209c1-audit-log\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.380678 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-secret-metrics-server-tls\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.482194 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5899ebcd-eec0-44ae-9e07-98b443d209c1-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.482268 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5899ebcd-eec0-44ae-9e07-98b443d209c1-metrics-server-audit-profiles\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.482330 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-secret-metrics-client-certs\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.482386 4854 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-client-ca-bundle\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.482424 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sppjw\" (UniqueName: \"kubernetes.io/projected/5899ebcd-eec0-44ae-9e07-98b443d209c1-kube-api-access-sppjw\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.482469 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5899ebcd-eec0-44ae-9e07-98b443d209c1-audit-log\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.483185 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-secret-metrics-server-tls\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.484012 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5899ebcd-eec0-44ae-9e07-98b443d209c1-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.484246 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5899ebcd-eec0-44ae-9e07-98b443d209c1-audit-log\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.487829 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5899ebcd-eec0-44ae-9e07-98b443d209c1-metrics-server-audit-profiles\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.491751 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-client-ca-bundle\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.492190 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-secret-metrics-client-certs\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " 
pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.492654 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5899ebcd-eec0-44ae-9e07-98b443d209c1-secret-metrics-server-tls\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.516283 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sppjw\" (UniqueName: \"kubernetes.io/projected/5899ebcd-eec0-44ae-9e07-98b443d209c1-kube-api-access-sppjw\") pod \"metrics-server-665fcf668f-65wrt\" (UID: \"5899ebcd-eec0-44ae-9e07-98b443d209c1\") " pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.527577 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.722789 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.724360 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.726264 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.731692 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.732054 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.786907 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a6f05342-5fbe-4b7a-b222-e52b87c7e754-monitoring-plugin-cert\") pod \"monitoring-plugin-57f57bb94b-jb8qx\" (UID: \"a6f05342-5fbe-4b7a-b222-e52b87c7e754\") " pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.888372 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a6f05342-5fbe-4b7a-b222-e52b87c7e754-monitoring-plugin-cert\") pod \"monitoring-plugin-57f57bb94b-jb8qx\" (UID: \"a6f05342-5fbe-4b7a-b222-e52b87c7e754\") " pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.893895 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a6f05342-5fbe-4b7a-b222-e52b87c7e754-monitoring-plugin-cert\") pod \"monitoring-plugin-57f57bb94b-jb8qx\" (UID: \"a6f05342-5fbe-4b7a-b222-e52b87c7e754\") " pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:42.956829 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" 
event={"ID":"b63db84d-d2f0-418e-978a-fbe97b3effbe","Type":"ContainerStarted","Data":"0af7bb4ec550e4c86245cc2198e4d7451ba1e8f32b923b3ed0439c56c87cfea6"} Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.050236 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.090650 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5446bc98d5-fqrs8"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.091448 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.114659 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5446bc98d5-fqrs8"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.285185 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.287189 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.297920 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-oauth-config\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.297977 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fp54\" (UniqueName: \"kubernetes.io/projected/d8578134-f9bd-481a-909a-297dd5b66076-kube-api-access-8fp54\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.298003 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-trusted-ca-bundle\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.298029 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-oauth-serving-cert\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.298292 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-console-config\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.298337 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-service-ca\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.298413 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-serving-cert\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.300244 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.300391 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.300478 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bzzpr" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.300556 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.301017 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-cecvmcubaiini" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.301168 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.301270 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.301369 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.301479 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.302381 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.302585 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.305050 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.306481 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.313850 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.399971 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9afcc108-879e-4244-a52b-1c5720d08571-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400023 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fp54\" (UniqueName: \"kubernetes.io/projected/d8578134-f9bd-481a-909a-297dd5b66076-kube-api-access-8fp54\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400049 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-trusted-ca-bundle\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400072 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400108 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-oauth-serving-cert\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400127 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400155 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400175 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400196 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400211 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400229 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-console-config\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400246 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-service-ca\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400265 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400279 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400294 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-config\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400311 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400328 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-web-config\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400345 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmfrr\" (UniqueName: \"kubernetes.io/projected/9afcc108-879e-4244-a52b-1c5720d08571-kube-api-access-bmfrr\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400362 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-serving-cert\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400380 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400398 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400415 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400436 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9afcc108-879e-4244-a52b-1c5720d08571-config-out\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400448 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.400466 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-oauth-config\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.402456 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-service-ca\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.402775 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-oauth-serving-cert\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc 
kubenswrapper[4854]: I0103 05:45:43.403208 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-console-config\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.404300 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-trusted-ca-bundle\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.408626 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-oauth-config\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.409043 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-serving-cert\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.421106 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fp54\" (UniqueName: \"kubernetes.io/projected/d8578134-f9bd-481a-909a-297dd5b66076-kube-api-access-8fp54\") pod \"console-5446bc98d5-fqrs8\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501792 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-config\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501842 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501862 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-web-config\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501880 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmfrr\" (UniqueName: \"kubernetes.io/projected/9afcc108-879e-4244-a52b-1c5720d08571-kube-api-access-bmfrr\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501902 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501923 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501939 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501962 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9afcc108-879e-4244-a52b-1c5720d08571-config-out\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.501975 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502003 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9afcc108-879e-4244-a52b-1c5720d08571-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502053 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502088 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502116 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502137 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502157 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502173 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502205 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.502230 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.503255 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.504056 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.505429 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.506146 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc 
kubenswrapper[4854]: I0103 05:45:43.506813 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-config\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.507312 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.508494 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.508541 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.508979 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.509239 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.509735 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.510096 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9afcc108-879e-4244-a52b-1c5720d08571-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.510251 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9afcc108-879e-4244-a52b-1c5720d08571-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.510516 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9afcc108-879e-4244-a52b-1c5720d08571-config-out\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.512096 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.516756 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.521513 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9afcc108-879e-4244-a52b-1c5720d08571-web-config\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.521950 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmfrr\" (UniqueName: \"kubernetes.io/projected/9afcc108-879e-4244-a52b-1c5720d08571-kube-api-access-bmfrr\") pod \"prometheus-k8s-0\" (UID: \"9afcc108-879e-4244-a52b-1c5720d08571\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.604073 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:45:45 crc kubenswrapper[4854]: I0103 05:45:43.707634 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:46 crc kubenswrapper[4854]: I0103 05:45:46.125113 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx"] Jan 03 05:45:46 crc kubenswrapper[4854]: I0103 05:45:46.140007 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-665fcf668f-65wrt"] Jan 03 05:45:46 crc kubenswrapper[4854]: I0103 05:45:46.147386 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5446bc98d5-fqrs8"] Jan 03 05:45:46 crc kubenswrapper[4854]: I0103 05:45:46.153362 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 03 05:45:46 crc kubenswrapper[4854]: W0103 05:45:46.374368 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8578134_f9bd_481a_909a_297dd5b66076.slice/crio-c45b7bd74568a9281ecc4fa66e6fe7fb9a354c2d8105abdf8e5e84c6c10e1415 WatchSource:0}: Error finding container c45b7bd74568a9281ecc4fa66e6fe7fb9a354c2d8105abdf8e5e84c6c10e1415: Status 404 returned error can't find the container with id c45b7bd74568a9281ecc4fa66e6fe7fb9a354c2d8105abdf8e5e84c6c10e1415 Jan 03 05:45:46 crc kubenswrapper[4854]: I0103 05:45:46.989045 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5446bc98d5-fqrs8" event={"ID":"d8578134-f9bd-481a-909a-297dd5b66076","Type":"ContainerStarted","Data":"c45b7bd74568a9281ecc4fa66e6fe7fb9a354c2d8105abdf8e5e84c6c10e1415"} Jan 03 05:45:47 crc kubenswrapper[4854]: W0103 05:45:47.066269 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f05342_5fbe_4b7a_b222_e52b87c7e754.slice/crio-6f0454c1ece03eb6f8dbe0c43d0726dcb1a922a13d19b8f631e19e64838c6764 WatchSource:0}: Error finding container 6f0454c1ece03eb6f8dbe0c43d0726dcb1a922a13d19b8f631e19e64838c6764: Status 404 returned error can't find the container with id 6f0454c1ece03eb6f8dbe0c43d0726dcb1a922a13d19b8f631e19e64838c6764 Jan 03 05:45:47 crc kubenswrapper[4854]: I0103 05:45:47.994835 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" event={"ID":"5899ebcd-eec0-44ae-9e07-98b443d209c1","Type":"ContainerStarted","Data":"2ecd6ced180f765fecba94662913739b5e14a02a96b0782d3ea5640b58cfcead"} Jan 03 05:45:47 crc kubenswrapper[4854]: I0103 05:45:47.995868 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" event={"ID":"a6f05342-5fbe-4b7a-b222-e52b87c7e754","Type":"ContainerStarted","Data":"6f0454c1ece03eb6f8dbe0c43d0726dcb1a922a13d19b8f631e19e64838c6764"} Jan 03 05:45:47 crc kubenswrapper[4854]: I0103 05:45:47.996582 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerStarted","Data":"1a62a029a02e9e99e0cfefab71b6a3a40ff378ffe312acea84a8e71ca1e2ec9d"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.013626 4854 generic.go:334] "Generic (PLEG): container finished" podID="3c7a8c7d-d011-4565-b546-fab08bf723f9" containerID="bed24e4d56f770d298bf83d8fcce663937b2483ed3dda60bebbaf15e13c04327" exitCode=0 Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.013723 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/node-exporter-mf6g9" event={"ID":"3c7a8c7d-d011-4565-b546-fab08bf723f9","Type":"ContainerDied","Data":"bed24e4d56f770d298bf83d8fcce663937b2483ed3dda60bebbaf15e13c04327"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.018905 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" event={"ID":"b63db84d-d2f0-418e-978a-fbe97b3effbe","Type":"ContainerStarted","Data":"9a51f2ae2c1cba44c906513dc9413d8f0e235e56e4fb489496d73b954d631e33"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.023321 4854 generic.go:334] "Generic (PLEG): container finished" podID="9afcc108-879e-4244-a52b-1c5720d08571" containerID="6ee19afd6c7cfb20e807e1e84a670307969ef27c8d1ce029adcc4e54e32517a8" exitCode=0 Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.023397 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerDied","Data":"6ee19afd6c7cfb20e807e1e84a670307969ef27c8d1ce029adcc4e54e32517a8"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.025610 4854 generic.go:334] "Generic (PLEG): container finished" podID="d7fd4c04-a71d-4912-ae27-3e3ee6c03edb" containerID="8bea685e5956d33b80b6e8f7c7f67ae0983232dc9b2cd1625eff66b61cf1b2b0" exitCode=0 Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.025780 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb","Type":"ContainerDied","Data":"8bea685e5956d33b80b6e8f7c7f67ae0983232dc9b2cd1625eff66b61cf1b2b0"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.027917 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5446bc98d5-fqrs8" event={"ID":"d8578134-f9bd-481a-909a-297dd5b66076","Type":"ContainerStarted","Data":"74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.051480 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" event={"ID":"0ee90900-26e8-4d06-b2b4-f646a1570746","Type":"ContainerStarted","Data":"ddd7bdc779ff1e88cc2e7c07ba5bbf6adb2b901f7b9a384f5c20d5c478b8c7ba"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.051539 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" event={"ID":"0ee90900-26e8-4d06-b2b4-f646a1570746","Type":"ContainerStarted","Data":"348a1bb58f0f1342f6bbb4d48dfcb9bd1b6b0adb939c78b3ae9b67ad5c2611a1"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.079046 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" event={"ID":"8da9ff59-acf5-4881-9d4c-3e0292c88de8","Type":"ContainerStarted","Data":"43d6a0e5c8040e9882447e40395758ba8e1c78a75e35135ea29ecb3559259539"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.080619 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" event={"ID":"8da9ff59-acf5-4881-9d4c-3e0292c88de8","Type":"ContainerStarted","Data":"2c2aaba85d35b95e1eb9aa0b6a52f6936236620cd22b16e35aa4805c366c81b6"} Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.108550 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-s74p5" podStartSLOduration=9.877476754 
podStartE2EDuration="15.108521905s" podCreationTimestamp="2026-01-03 05:45:36 +0000 UTC" firstStartedPulling="2026-01-03 05:45:42.753246551 +0000 UTC m=+321.079823123" lastFinishedPulling="2026-01-03 05:45:47.984291682 +0000 UTC m=+326.310868274" observedRunningTime="2026-01-03 05:45:51.104132652 +0000 UTC m=+329.430709244" watchObservedRunningTime="2026-01-03 05:45:51.108521905 +0000 UTC m=+329.435098477" Jan 03 05:45:51 crc kubenswrapper[4854]: I0103 05:45:51.124658 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5446bc98d5-fqrs8" podStartSLOduration=8.124637782 podStartE2EDuration="8.124637782s" podCreationTimestamp="2026-01-03 05:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:45:51.119647165 +0000 UTC m=+329.446223747" watchObservedRunningTime="2026-01-03 05:45:51.124637782 +0000 UTC m=+329.451214344" Jan 03 05:45:52 crc kubenswrapper[4854]: I0103 05:45:52.087613 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" event={"ID":"8da9ff59-acf5-4881-9d4c-3e0292c88de8","Type":"ContainerStarted","Data":"00e94dd667c7dcdbb14840e0e7690a4a14357ddbefa6fa4831a2f9dc4ecd1108"} Jan 03 05:45:52 crc kubenswrapper[4854]: I0103 05:45:52.093517 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mf6g9" event={"ID":"3c7a8c7d-d011-4565-b546-fab08bf723f9","Type":"ContainerStarted","Data":"41054086f43a859c43d83d39c64108d0bd000a65977c01f32640349393f14a35"} Jan 03 05:45:52 crc kubenswrapper[4854]: I0103 05:45:52.093558 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mf6g9" event={"ID":"3c7a8c7d-d011-4565-b546-fab08bf723f9","Type":"ContainerStarted","Data":"23ab626f91cc5bab3a1bf000461addd443b85c4fdefbbf8eb35a9f5134c3876a"} Jan 03 05:45:52 crc kubenswrapper[4854]: I0103 05:45:52.098025 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" event={"ID":"0ee90900-26e8-4d06-b2b4-f646a1570746","Type":"ContainerStarted","Data":"9dcd8ae9f25f086eec48d16409a55b45e313d866fd461bbd06807e26f55cabe1"} Jan 03 05:45:52 crc kubenswrapper[4854]: I0103 05:45:52.151377 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-mf6g9" podStartSLOduration=6.070852483 podStartE2EDuration="16.151351427s" podCreationTimestamp="2026-01-03 05:45:36 +0000 UTC" firstStartedPulling="2026-01-03 05:45:37.845722646 +0000 UTC m=+316.172299218" lastFinishedPulling="2026-01-03 05:45:47.92622159 +0000 UTC m=+326.252798162" observedRunningTime="2026-01-03 05:45:52.145170022 +0000 UTC m=+330.471746634" watchObservedRunningTime="2026-01-03 05:45:52.151351427 +0000 UTC m=+330.477927999" Jan 03 05:45:52 crc kubenswrapper[4854]: I0103 05:45:52.153166 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-k9cwc" podStartSLOduration=5.812043545 podStartE2EDuration="16.153158229s" podCreationTimestamp="2026-01-03 05:45:36 +0000 UTC" firstStartedPulling="2026-01-03 05:45:37.585107936 +0000 UTC m=+315.911684518" lastFinishedPulling="2026-01-03 05:45:47.92622263 +0000 UTC m=+326.252799202" observedRunningTime="2026-01-03 05:45:52.11601541 +0000 UTC m=+330.442592022" watchObservedRunningTime="2026-01-03 05:45:52.153158229 +0000 UTC m=+330.479734811" Jan 03 
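
The "Observed pod startup duration" entries above are internally consistent and worth decoding once: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window, taken from the monotonic clock offsets (the m=+... values). When no pull happened, the pull timestamps are the zero time ("0001-01-01 ...") and the two durations coincide, as in the console-5446bc98d5-fqrs8 entry. A quick check in Python against the kube-state-metrics-777cb5bd5d-k9cwc entry:

    # Values copied from the log entry above; all in seconds.
    first_started_pulling = 315.911684518   # m=+ offset at firstStartedPulling
    last_finished_pulling = 326.252799202   # m=+ offset at lastFinishedPulling
    e2e = 16.153158229                      # podStartE2EDuration
    pull_window = last_finished_pulling - first_started_pulling
    print(f"pull window  : {pull_window:.9f}s")        # -> 10.341114684s
    print(f"SLO duration : {e2e - pull_window:.9f}s")  # -> 5.812043545s, matching podStartSLOduration
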
05:45:53 crc kubenswrapper[4854]: I0103 05:45:53.708755 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:53 crc kubenswrapper[4854]: I0103 05:45:53.709156 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:53 crc kubenswrapper[4854]: I0103 05:45:53.714562 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:54 crc kubenswrapper[4854]: I0103 05:45:54.111916 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" event={"ID":"5899ebcd-eec0-44ae-9e07-98b443d209c1","Type":"ContainerStarted","Data":"f12fe622af614b41bd44ec6bb3c9b091e81f021cd35ea69f811ff6d066d06d2b"} Jan 03 05:45:54 crc kubenswrapper[4854]: I0103 05:45:54.114345 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" event={"ID":"a6f05342-5fbe-4b7a-b222-e52b87c7e754","Type":"ContainerStarted","Data":"3bd86b83592f63e0488006e3c447006830f67b288103c335ddd230a25149cc29"} Jan 03 05:45:54 crc kubenswrapper[4854]: I0103 05:45:54.135472 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:45:54 crc kubenswrapper[4854]: I0103 05:45:54.138570 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" podStartSLOduration=6.059757356 podStartE2EDuration="12.138548067s" podCreationTimestamp="2026-01-03 05:45:42 +0000 UTC" firstStartedPulling="2026-01-03 05:45:47.14282267 +0000 UTC m=+325.469399242" lastFinishedPulling="2026-01-03 05:45:53.221613381 +0000 UTC m=+331.548189953" observedRunningTime="2026-01-03 05:45:54.130240123 +0000 UTC m=+332.456816705" watchObservedRunningTime="2026-01-03 05:45:54.138548067 +0000 UTC m=+332.465124649" Jan 03 05:45:54 crc kubenswrapper[4854]: I0103 05:45:54.153320 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" podStartSLOduration=6.068293926 podStartE2EDuration="12.153296122s" podCreationTimestamp="2026-01-03 05:45:42 +0000 UTC" firstStartedPulling="2026-01-03 05:45:47.142938813 +0000 UTC m=+325.469515385" lastFinishedPulling="2026-01-03 05:45:53.227941009 +0000 UTC m=+331.554517581" observedRunningTime="2026-01-03 05:45:54.142529481 +0000 UTC m=+332.469106073" watchObservedRunningTime="2026-01-03 05:45:54.153296122 +0000 UTC m=+332.479872724" Jan 03 05:45:54 crc kubenswrapper[4854]: I0103 05:45:54.192591 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zhzlw"] Jan 03 05:45:55 crc kubenswrapper[4854]: I0103 05:45:55.120656 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 05:45:55 crc kubenswrapper[4854]: I0103 05:45:55.129076 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 05:45:59 crc kubenswrapper[4854]: I0103 05:45:59.146420 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb","Type":"ContainerStarted","Data":"81f1811f4a0dabeae667f269858d0b44ebee21710b47f873ac7b571734eea064"} Jan 03 
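
The "SyncLoop (probe)" lines in this burst show the probe gating order for console-5446bc98d5-fqrs8: readiness first reports an empty status (not yet probed), the startup probe goes "unhealthy" then "started", and only afterwards does readiness flip to "ready". That ordering matches the documented Kubernetes behavior of holding liveness and readiness checks off until the startup probe succeeds. A toy model of that gate (illustrative only, not kubelet code):

    # Empty string = "not yet probed", mirroring the status="" lines above.
    def effective_readiness(startup_status: str, readiness_status: str) -> str:
        if startup_status != "started":
            return ""  # startup gate still closed: readiness not evaluated
        return readiness_status

    print(repr(effective_readiness("unhealthy", "ready")))  # -> '' (gated)
    print(repr(effective_readiness("started", "ready")))    # -> 'ready'
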
05:45:59 crc kubenswrapper[4854]: I0103 05:45:59.150795 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" event={"ID":"0ee90900-26e8-4d06-b2b4-f646a1570746","Type":"ContainerStarted","Data":"940e0ea716585313004899a2044ddd8f3ff0af06cbed0093b93d41c07de45be2"} Jan 03 05:46:00 crc kubenswrapper[4854]: I0103 05:46:00.161803 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb","Type":"ContainerStarted","Data":"55901d0ef0c801034eed343171db620e28aae8f455f6a938c17ac72959a71da0"} Jan 03 05:46:00 crc kubenswrapper[4854]: I0103 05:46:00.166336 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" event={"ID":"0ee90900-26e8-4d06-b2b4-f646a1570746","Type":"ContainerStarted","Data":"0c1035117e194c21a85b2bcce0d73b20629294b7168196c8e28a703f5c44cf7c"} Jan 03 05:46:02 crc kubenswrapper[4854]: I0103 05:46:02.184534 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerStarted","Data":"126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d"} Jan 03 05:46:02 crc kubenswrapper[4854]: I0103 05:46:02.189360 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb","Type":"ContainerStarted","Data":"31ed61fb457307dfab154d80cdbd3f949b0e190dd8c8b76488f6c1c3a169dc0f"} Jan 03 05:46:02 crc kubenswrapper[4854]: I0103 05:46:02.193652 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" event={"ID":"0ee90900-26e8-4d06-b2b4-f646a1570746","Type":"ContainerStarted","Data":"00d70a93b783c8c3751be2a84e61b3f5a8e4cec6f4b835fe0c99057a9ce6c487"} Jan 03 05:46:02 crc kubenswrapper[4854]: I0103 05:46:02.194955 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:46:02 crc kubenswrapper[4854]: I0103 05:46:02.211602 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" Jan 03 05:46:02 crc kubenswrapper[4854]: I0103 05:46:02.236000 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podStartSLOduration=5.820054043 podStartE2EDuration="24.235979575s" podCreationTimestamp="2026-01-03 05:45:38 +0000 UTC" firstStartedPulling="2026-01-03 05:45:39.912440003 +0000 UTC m=+318.239016586" lastFinishedPulling="2026-01-03 05:45:58.328365506 +0000 UTC m=+336.654942118" observedRunningTime="2026-01-03 05:46:02.231963531 +0000 UTC m=+340.558540103" watchObservedRunningTime="2026-01-03 05:46:02.235979575 +0000 UTC m=+340.562556157" Jan 03 05:46:02 crc kubenswrapper[4854]: I0103 05:46:02.527984 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:46:02 crc kubenswrapper[4854]: I0103 05:46:02.528111 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:46:03 crc kubenswrapper[4854]: I0103 05:46:03.200128 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerStarted","Data":"6c06b1fafc6ce023018f6ca46c1b6225ec60cd66baa80df08a263eaa83ee6cda"} Jan 03 05:46:03 crc kubenswrapper[4854]: I0103 05:46:03.204230 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb","Type":"ContainerStarted","Data":"1df85d35c5f644c0c921078998eb705c763abeed56484390c6a60ee48b03d9e0"} Jan 03 05:46:04 crc kubenswrapper[4854]: I0103 05:46:04.221064 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb","Type":"ContainerStarted","Data":"c0a92bcee4408f49cc84fb1d6ac9d2bfc3df1328b34e58fefd8880a828124d12"} Jan 03 05:46:05 crc kubenswrapper[4854]: I0103 05:46:05.237413 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerStarted","Data":"5e7e4bb97478d1cde325623c9da23770225d4fa213f36fe5768644d070563192"} Jan 03 05:46:06 crc kubenswrapper[4854]: I0103 05:46:06.251572 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d7fd4c04-a71d-4912-ae27-3e3ee6c03edb","Type":"ContainerStarted","Data":"ca68c71352cd5032fe71215f5cccbcde1352ca72dec426249df9aa743626f8f0"} Jan 03 05:46:06 crc kubenswrapper[4854]: I0103 05:46:06.255952 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerStarted","Data":"4561c769edb8e327c2fcf397ac6e03c5948df0ed45671e858b5319cd27eaa6ec"} Jan 03 05:46:07 crc kubenswrapper[4854]: I0103 05:46:07.295565 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=10.700251863 podStartE2EDuration="30.295540697s" podCreationTimestamp="2026-01-03 05:45:37 +0000 UTC" firstStartedPulling="2026-01-03 05:45:38.744430338 +0000 UTC m=+317.071006940" lastFinishedPulling="2026-01-03 05:45:58.339719172 +0000 UTC m=+336.666295774" observedRunningTime="2026-01-03 05:46:07.289824734 +0000 UTC m=+345.616401346" watchObservedRunningTime="2026-01-03 05:46:07.295540697 +0000 UTC m=+345.622117309" Jan 03 05:46:08 crc kubenswrapper[4854]: I0103 05:46:08.270988 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerStarted","Data":"77d5ccda10527348b7276a583337f7f041c00b7e0947153aa99ca45a912395ef"} Jan 03 05:46:09 crc kubenswrapper[4854]: I0103 05:46:09.284135 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerStarted","Data":"f4c166e3bb4e4b4d2d65e761eea000088ef9abd4d6f693569c40dcd44b779118"} Jan 03 05:46:09 crc kubenswrapper[4854]: I0103 05:46:09.338897 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=12.607943039 podStartE2EDuration="26.33887194s" podCreationTimestamp="2026-01-03 05:45:43 +0000 UTC" firstStartedPulling="2026-01-03 05:45:47.143517108 +0000 UTC m=+325.470093700" lastFinishedPulling="2026-01-03 05:46:00.874446009 +0000 UTC m=+339.201022601" observedRunningTime="2026-01-03 05:46:09.334662582 +0000 UTC m=+347.661239234" 
watchObservedRunningTime="2026-01-03 05:46:09.33887194 +0000 UTC m=+347.665448552" Jan 03 05:46:13 crc kubenswrapper[4854]: I0103 05:46:13.604309 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:46:19 crc kubenswrapper[4854]: I0103 05:46:19.263788 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zhzlw" podUID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" containerName="console" containerID="cri-o://26fc79b066ff144cb2fbb1b2afc5e1bddcf9b6336e482cca9f1dd2f20889a365" gracePeriod=15 Jan 03 05:46:21 crc kubenswrapper[4854]: I0103 05:46:21.379336 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zhzlw_ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc/console/0.log" Jan 03 05:46:21 crc kubenswrapper[4854]: I0103 05:46:21.379672 4854 generic.go:334] "Generic (PLEG): container finished" podID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" containerID="26fc79b066ff144cb2fbb1b2afc5e1bddcf9b6336e482cca9f1dd2f20889a365" exitCode=2 Jan 03 05:46:21 crc kubenswrapper[4854]: I0103 05:46:21.379710 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zhzlw" event={"ID":"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc","Type":"ContainerDied","Data":"26fc79b066ff144cb2fbb1b2afc5e1bddcf9b6336e482cca9f1dd2f20889a365"} Jan 03 05:46:22 crc kubenswrapper[4854]: I0103 05:46:22.537783 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:46:22 crc kubenswrapper[4854]: I0103 05:46:22.546424 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 05:46:28 crc kubenswrapper[4854]: I0103 05:46:28.258832 4854 patch_prober.go:28] interesting pod/console-f9d7485db-zhzlw container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 03 05:46:28 crc kubenswrapper[4854]: I0103 05:46:28.259204 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-zhzlw" podUID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.607323 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zhzlw_ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc/console/0.log" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.607746 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.700717 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-service-ca\") pod \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.700791 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-trusted-ca-bundle\") pod \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.700892 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-config\") pod \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.700939 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-serving-cert\") pod \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.701012 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-oauth-config\") pod \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.701041 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bbch\" (UniqueName: \"kubernetes.io/projected/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-kube-api-access-7bbch\") pod \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.701062 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-oauth-serving-cert\") pod \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\" (UID: \"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc\") " Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.702148 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-service-ca" (OuterVolumeSpecName: "service-ca") pod "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" (UID: "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.703211 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" (UID: "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.703791 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-config" (OuterVolumeSpecName: "console-config") pod "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" (UID: "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.703947 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" (UID: "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.716610 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-kube-api-access-7bbch" (OuterVolumeSpecName: "kube-api-access-7bbch") pod "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" (UID: "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc"). InnerVolumeSpecName "kube-api-access-7bbch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.719607 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" (UID: "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.724494 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" (UID: "ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.802980 4854 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.803039 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bbch\" (UniqueName: \"kubernetes.io/projected/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-kube-api-access-7bbch\") on node \"crc\" DevicePath \"\"" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.803061 4854 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.803106 4854 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.803124 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.803141 4854 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:46:29 crc kubenswrapper[4854]: I0103 05:46:29.803157 4854 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:46:30 crc kubenswrapper[4854]: I0103 05:46:30.453219 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zhzlw_ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc/console/0.log" Jan 03 05:46:30 crc kubenswrapper[4854]: I0103 05:46:30.453300 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zhzlw" event={"ID":"ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc","Type":"ContainerDied","Data":"94f668d9c1065fe601f661008a84a0de38236b33c529e8668a457a45879bc164"} Jan 03 05:46:30 crc kubenswrapper[4854]: I0103 05:46:30.453353 4854 scope.go:117] "RemoveContainer" containerID="26fc79b066ff144cb2fbb1b2afc5e1bddcf9b6336e482cca9f1dd2f20889a365" Jan 03 05:46:30 crc kubenswrapper[4854]: I0103 05:46:30.453523 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zhzlw" Jan 03 05:46:30 crc kubenswrapper[4854]: I0103 05:46:30.488992 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zhzlw"] Jan 03 05:46:30 crc kubenswrapper[4854]: I0103 05:46:30.503277 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zhzlw"] Jan 03 05:46:32 crc kubenswrapper[4854]: I0103 05:46:32.133467 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" path="/var/lib/kubelet/pods/ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc/volumes" Jan 03 05:46:41 crc kubenswrapper[4854]: I0103 05:46:41.755546 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:46:41 crc kubenswrapper[4854]: I0103 05:46:41.756602 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:46:43 crc kubenswrapper[4854]: I0103 05:46:43.604401 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:46:43 crc kubenswrapper[4854]: I0103 05:46:43.634715 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:46:44 crc kubenswrapper[4854]: I0103 05:46:44.573047 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 05:47:11 crc kubenswrapper[4854]: I0103 05:47:11.755790 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:47:11 crc kubenswrapper[4854]: I0103 05:47:11.756403 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.305864 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76775dbc85-4tdnl"] Jan 03 05:47:18 crc kubenswrapper[4854]: E0103 05:47:18.306694 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" containerName="console" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.306715 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" containerName="console" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.306944 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6e5b2a-944a-4fe5-a26e-a705fa9bcedc" containerName="console" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.307629 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.324142 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76775dbc85-4tdnl"] Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.463652 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjh7\" (UniqueName: \"kubernetes.io/projected/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-kube-api-access-7vjh7\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.464034 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-oauth-config\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.464246 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-trusted-ca-bundle\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.464324 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-config\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.464555 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-serving-cert\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.464623 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-service-ca\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.464826 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-oauth-serving-cert\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.567148 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-oauth-config\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc 
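
The reconciler entries above walk the standard volume path for the replacement pod console-76775dbc85-4tdnl: VerifyControllerAttachedVolume for each declared volume, then MountVolume started, followed by the MountVolume.SetUp confirmations below, mirroring the earlier prometheus-k8s-0 run; the outgoing console pod's volumes were released in the opposite direction above (UnmountVolume.TearDown, then "Volume detached"). For ad-hoc triage it can help to tally SetUp successes per pod. A minimal parsing sketch, assuming this journal has been saved to a file (the filename is hypothetical):

    import re
    from collections import defaultdict

    # Matches the escaped-klog format of the entries above:
    #   "MountVolume.SetUp succeeded for volume \"NAME\" ... pod="NAMESPACE/POD"
    pat = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\".*?pod="([^"]+)"')

    mounts = defaultdict(set)
    with open("kubelet-journal.log") as f:  # hypothetical capture of this journal
        for line in f:
            for vol, pod in pat.findall(line):
                mounts[pod].add(vol)

    for pod in sorted(mounts):
        print(f"{pod}: {len(mounts[pod])} volumes: {', '.join(sorted(mounts[pod]))}")
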
kubenswrapper[4854]: I0103 05:47:18.567776 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-trusted-ca-bundle\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.567834 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-config\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.569266 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-trusted-ca-bundle\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.569821 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-config\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.570137 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-service-ca\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.571741 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-service-ca\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.572017 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-serving-cert\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.573057 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-oauth-serving-cert\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.573354 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjh7\" (UniqueName: \"kubernetes.io/projected/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-kube-api-access-7vjh7\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.573824 4854 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-oauth-serving-cert\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.577016 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-oauth-config\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.577195 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-serving-cert\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.597896 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjh7\" (UniqueName: \"kubernetes.io/projected/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-kube-api-access-7vjh7\") pod \"console-76775dbc85-4tdnl\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.627943 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:18 crc kubenswrapper[4854]: I0103 05:47:18.897489 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76775dbc85-4tdnl"] Jan 03 05:47:18 crc kubenswrapper[4854]: W0103 05:47:18.909219 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode77a9738_c4d7_4ca3_bbf7_c1c6e8ce4210.slice/crio-54db28405a9c9114c00755ba3742c00bbe63caa734348b6b9f7b47b1a8ab7d24 WatchSource:0}: Error finding container 54db28405a9c9114c00755ba3742c00bbe63caa734348b6b9f7b47b1a8ab7d24: Status 404 returned error can't find the container with id 54db28405a9c9114c00755ba3742c00bbe63caa734348b6b9f7b47b1a8ab7d24 Jan 03 05:47:19 crc kubenswrapper[4854]: I0103 05:47:19.810369 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76775dbc85-4tdnl" event={"ID":"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210","Type":"ContainerStarted","Data":"6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3"} Jan 03 05:47:19 crc kubenswrapper[4854]: I0103 05:47:19.810842 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76775dbc85-4tdnl" event={"ID":"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210","Type":"ContainerStarted","Data":"54db28405a9c9114c00755ba3742c00bbe63caa734348b6b9f7b47b1a8ab7d24"} Jan 03 05:47:19 crc kubenswrapper[4854]: I0103 05:47:19.842119 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76775dbc85-4tdnl" podStartSLOduration=1.8420655369999999 podStartE2EDuration="1.842065537s" podCreationTimestamp="2026-01-03 05:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:47:19.839634327 +0000 UTC m=+418.166210969" 
watchObservedRunningTime="2026-01-03 05:47:19.842065537 +0000 UTC m=+418.168642139" Jan 03 05:47:28 crc kubenswrapper[4854]: I0103 05:47:28.629276 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:28 crc kubenswrapper[4854]: I0103 05:47:28.632778 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:28 crc kubenswrapper[4854]: I0103 05:47:28.660884 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:28 crc kubenswrapper[4854]: I0103 05:47:28.894297 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:47:28 crc kubenswrapper[4854]: I0103 05:47:28.992373 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5446bc98d5-fqrs8"] Jan 03 05:47:41 crc kubenswrapper[4854]: I0103 05:47:41.757510 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:47:41 crc kubenswrapper[4854]: I0103 05:47:41.758157 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:47:41 crc kubenswrapper[4854]: I0103 05:47:41.758223 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:47:41 crc kubenswrapper[4854]: I0103 05:47:41.759368 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8ba999ad3c3dcd9750b64af99186e9f84152e1189793a50472b4e974fec8292"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:47:41 crc kubenswrapper[4854]: I0103 05:47:41.759458 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://d8ba999ad3c3dcd9750b64af99186e9f84152e1189793a50472b4e974fec8292" gracePeriod=600 Jan 03 05:47:41 crc kubenswrapper[4854]: I0103 05:47:41.988505 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="d8ba999ad3c3dcd9750b64af99186e9f84152e1189793a50472b4e974fec8292" exitCode=0 Jan 03 05:47:41 crc kubenswrapper[4854]: I0103 05:47:41.988634 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"d8ba999ad3c3dcd9750b64af99186e9f84152e1189793a50472b4e974fec8292"} Jan 03 05:47:41 crc kubenswrapper[4854]: I0103 05:47:41.988963 4854 scope.go:117] "RemoveContainer" containerID="41ee4426739e125fc38ef9de0bc907f228c08816a774c8b5f992bf1e1c0c09cc" Jan 03 
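
The machine-config-daemon liveness failures above land exactly 30 seconds apart (05:46:41, 05:47:11, 05:47:41), and after the third the kubelet kills the container with gracePeriod=600 and restarts it in the entry that follows, consistent with a 30-second probe period and a failure threshold of three. The probe itself is a plain HTTP GET against 127.0.0.1:8798/health; "connection refused" just means nothing was listening yet. A minimal sketch of such a check, assuming standard HTTP-probe semantics (2xx/3xx = success):

    import urllib.error
    import urllib.request

    def http_probe(url="http://127.0.0.1:8798/health", timeout=1.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError) as exc:
            print(f"probe failure: {exc}")  # e.g. connection refused, as logged above
            return False

    print("healthy" if http_probe() else "unhealthy")
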
05:47:43 crc kubenswrapper[4854]: I0103 05:47:43.001679 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"75bb4ac621ba37ad54638b77615a99cc4b805eef98715dac578d470754e1c858"} Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.047139 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5446bc98d5-fqrs8" podUID="d8578134-f9bd-481a-909a-297dd5b66076" containerName="console" containerID="cri-o://74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9" gracePeriod=15 Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.445795 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5446bc98d5-fqrs8_d8578134-f9bd-481a-909a-297dd5b66076/console/0.log" Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.446239 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5446bc98d5-fqrs8" Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.447167 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-console-config\") pod \"d8578134-f9bd-481a-909a-297dd5b66076\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.447201 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-oauth-serving-cert\") pod \"d8578134-f9bd-481a-909a-297dd5b66076\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.447227 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-trusted-ca-bundle\") pod \"d8578134-f9bd-481a-909a-297dd5b66076\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.447248 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-serving-cert\") pod \"d8578134-f9bd-481a-909a-297dd5b66076\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.447310 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fp54\" (UniqueName: \"kubernetes.io/projected/d8578134-f9bd-481a-909a-297dd5b66076-kube-api-access-8fp54\") pod \"d8578134-f9bd-481a-909a-297dd5b66076\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.447349 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-service-ca\") pod \"d8578134-f9bd-481a-909a-297dd5b66076\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.447381 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-oauth-config\") pod 
\"d8578134-f9bd-481a-909a-297dd5b66076\" (UID: \"d8578134-f9bd-481a-909a-297dd5b66076\") " Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.449407 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d8578134-f9bd-481a-909a-297dd5b66076" (UID: "d8578134-f9bd-481a-909a-297dd5b66076"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.449452 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-console-config" (OuterVolumeSpecName: "console-config") pod "d8578134-f9bd-481a-909a-297dd5b66076" (UID: "d8578134-f9bd-481a-909a-297dd5b66076"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.449464 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-service-ca" (OuterVolumeSpecName: "service-ca") pod "d8578134-f9bd-481a-909a-297dd5b66076" (UID: "d8578134-f9bd-481a-909a-297dd5b66076"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.449996 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d8578134-f9bd-481a-909a-297dd5b66076" (UID: "d8578134-f9bd-481a-909a-297dd5b66076"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.455073 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d8578134-f9bd-481a-909a-297dd5b66076" (UID: "d8578134-f9bd-481a-909a-297dd5b66076"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.455572 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d8578134-f9bd-481a-909a-297dd5b66076" (UID: "d8578134-f9bd-481a-909a-297dd5b66076"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.456273 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8578134-f9bd-481a-909a-297dd5b66076-kube-api-access-8fp54" (OuterVolumeSpecName: "kube-api-access-8fp54") pod "d8578134-f9bd-481a-909a-297dd5b66076" (UID: "d8578134-f9bd-481a-909a-297dd5b66076"). InnerVolumeSpecName "kube-api-access-8fp54". 
Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.548809 4854 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-service-ca\") on node \"crc\" DevicePath \"\""
Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.548874 4854 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.548890 4854 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-console-config\") on node \"crc\" DevicePath \"\""
Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.548901 4854 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.548913 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8578134-f9bd-481a-909a-297dd5b66076-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.548925 4854 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8578134-f9bd-481a-909a-297dd5b66076-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 03 05:47:54 crc kubenswrapper[4854]: I0103 05:47:54.548968 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fp54\" (UniqueName: \"kubernetes.io/projected/d8578134-f9bd-481a-909a-297dd5b66076-kube-api-access-8fp54\") on node \"crc\" DevicePath \"\""
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.101629 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5446bc98d5-fqrs8_d8578134-f9bd-481a-909a-297dd5b66076/console/0.log"
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.101676 4854 generic.go:334] "Generic (PLEG): container finished" podID="d8578134-f9bd-481a-909a-297dd5b66076" containerID="74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9" exitCode=2
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.101703 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5446bc98d5-fqrs8" event={"ID":"d8578134-f9bd-481a-909a-297dd5b66076","Type":"ContainerDied","Data":"74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9"}
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.101730 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5446bc98d5-fqrs8" event={"ID":"d8578134-f9bd-481a-909a-297dd5b66076","Type":"ContainerDied","Data":"c45b7bd74568a9281ecc4fa66e6fe7fb9a354c2d8105abdf8e5e84c6c10e1415"}
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.101746 4854 scope.go:117] "RemoveContainer" containerID="74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9"
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.101849 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5446bc98d5-fqrs8"
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.127663 4854 scope.go:117] "RemoveContainer" containerID="74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9"
Jan 03 05:47:55 crc kubenswrapper[4854]: E0103 05:47:55.129244 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9\": container with ID starting with 74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9 not found: ID does not exist" containerID="74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9"
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.129301 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9"} err="failed to get container status \"74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9\": rpc error: code = NotFound desc = could not find container \"74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9\": container with ID starting with 74843480a1570e022bd2b3afedd694a9e6f6a85c290b03723da57a336247f3b9 not found: ID does not exist"
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.142909 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5446bc98d5-fqrs8"]
Jan 03 05:47:55 crc kubenswrapper[4854]: I0103 05:47:55.151519 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5446bc98d5-fqrs8"]
Jan 03 05:47:56 crc kubenswrapper[4854]: I0103 05:47:56.132610 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8578134-f9bd-481a-909a-297dd5b66076" path="/var/lib/kubelet/pods/d8578134-f9bd-481a-909a-297dd5b66076/volumes"
Jan 03 05:50:11 crc kubenswrapper[4854]: I0103 05:50:11.755683 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 05:50:11 crc kubenswrapper[4854]: I0103 05:50:11.756516 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 05:50:41 crc kubenswrapper[4854]: I0103 05:50:41.755693 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 05:50:41 crc kubenswrapper[4854]: I0103 05:50:41.756202 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
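Note: the probe entries above record the kubelet's HTTP liveness probe against the machine-config-daemon failing with connection refused. The kubelet counts an HTTP probe as successful when the response status is in [200, 400); transport errors like this one are failures. A sketch of that check under those assumptions (endpoint taken from the log; the one-second timeout is arbitrary):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHTTP approximates the kubelet HTTP prober: any status code in
    // [200, 400) is success; transport errors such as
    // "connect: connection refused" count as probe failure.
    func probeHTTP(url string, timeout time.Duration) error {
    	client := &http.Client{Timeout: timeout}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		return nil
    	}
    	return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
    }

    func main() {
    	fmt.Println(probeHTTP("http://127.0.0.1:8798/health", time.Second))
    }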
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.024499 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"]
Jan 03 05:50:54 crc kubenswrapper[4854]: E0103 05:50:54.025755 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8578134-f9bd-481a-909a-297dd5b66076" containerName="console"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.025786 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8578134-f9bd-481a-909a-297dd5b66076" containerName="console"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.026060 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8578134-f9bd-481a-909a-297dd5b66076" containerName="console"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.028049 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.031614 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.040893 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"]
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.136953 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.137125 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.137202 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tp82\" (UniqueName: \"kubernetes.io/projected/b6420d73-ab72-4d83-9234-1700c23e6393-kube-api-access-6tp82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.238861 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.239035 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tp82\" (UniqueName: \"kubernetes.io/projected/b6420d73-ab72-4d83-9234-1700c23e6393-kube-api-access-6tp82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.239250 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.242008 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.242872 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.263019 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tp82\" (UniqueName: \"kubernetes.io/projected/b6420d73-ab72-4d83-9234-1700c23e6393-kube-api-access-6tp82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
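Note: the reconciler_common.go entries above trace the kubelet volume manager's reconcile loop: it diffs the desired volume set (from admitted pod specs) against what is actually mounted, then issues MountVolume and UnmountVolume operations until the two agree. A toy model of the diff step (types and names here are invented for illustration, not kubelet's own):

    package main

    import "fmt"

    // reconcile is a toy version of the volume manager loop: mount what is
    // desired but absent, unmount what is mounted but no longer desired.
    func reconcile(desired, mounted map[string]bool) (toMount, toUnmount []string) {
    	for v := range desired {
    		if !mounted[v] {
    			toMount = append(toMount, v)
    		}
    	}
    	for v := range mounted {
    		if !desired[v] {
    			toUnmount = append(toUnmount, v)
    		}
    	}
    	return
    }

    func main() {
    	desired := map[string]bool{"bundle": true, "util": true, "kube-api-access-6tp82": true}
    	mounted := map[string]bool{"kube-api-access-6tp82": true, "console-config": true}
    	m, u := reconcile(desired, mounted)
    	fmt.Println("MountVolume started for:", m)   // bundle and util
    	fmt.Println("UnmountVolume started for:", u) // console-config
    }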
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.357649 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:50:54 crc kubenswrapper[4854]: I0103 05:50:54.854509 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"]
Jan 03 05:50:54 crc kubenswrapper[4854]: W0103 05:50:54.874596 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6420d73_ab72_4d83_9234_1700c23e6393.slice/crio-2417cc719c209fdb94b47f92763c645a6550f8827807f1132bac8b0fe9b43b4f WatchSource:0}: Error finding container 2417cc719c209fdb94b47f92763c645a6550f8827807f1132bac8b0fe9b43b4f: Status 404 returned error can't find the container with id 2417cc719c209fdb94b47f92763c645a6550f8827807f1132bac8b0fe9b43b4f
Jan 03 05:50:55 crc kubenswrapper[4854]: I0103 05:50:55.535223 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp" event={"ID":"b6420d73-ab72-4d83-9234-1700c23e6393","Type":"ContainerDied","Data":"4aceec162543476bbed9162235252bd54e11251c51ca0822e3ca94471f167cc3"}
Jan 03 05:50:55 crc kubenswrapper[4854]: I0103 05:50:55.537115 4854 generic.go:334] "Generic (PLEG): container finished" podID="b6420d73-ab72-4d83-9234-1700c23e6393" containerID="4aceec162543476bbed9162235252bd54e11251c51ca0822e3ca94471f167cc3" exitCode=0
Jan 03 05:50:55 crc kubenswrapper[4854]: I0103 05:50:55.537189 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp" event={"ID":"b6420d73-ab72-4d83-9234-1700c23e6393","Type":"ContainerStarted","Data":"2417cc719c209fdb94b47f92763c645a6550f8827807f1132bac8b0fe9b43b4f"}
Jan 03 05:50:55 crc kubenswrapper[4854]: I0103 05:50:55.539527 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 03 05:50:57 crc kubenswrapper[4854]: I0103 05:50:57.552507 4854 generic.go:334] "Generic (PLEG): container finished" podID="b6420d73-ab72-4d83-9234-1700c23e6393" containerID="538f2151af52c8682a6d15833470e13db560f6a863f03890411a385425676aad" exitCode=0
Jan 03 05:50:57 crc kubenswrapper[4854]: I0103 05:50:57.552603 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp" event={"ID":"b6420d73-ab72-4d83-9234-1700c23e6393","Type":"ContainerDied","Data":"538f2151af52c8682a6d15833470e13db560f6a863f03890411a385425676aad"}
Jan 03 05:50:58 crc kubenswrapper[4854]: I0103 05:50:58.566908 4854 generic.go:334] "Generic (PLEG): container finished" podID="b6420d73-ab72-4d83-9234-1700c23e6393" containerID="f86c511eb50a08a3ee595da5604687d3138558d9d823a5a9301d176d1f79e442" exitCode=0
Jan 03 05:50:58 crc kubenswrapper[4854]: I0103 05:50:58.566966 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp" event={"ID":"b6420d73-ab72-4d83-9234-1700c23e6393","Type":"ContainerDied","Data":"f86c511eb50a08a3ee595da5604687d3138558d9d823a5a9301d176d1f79e442"}
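Note: every kubenswrapper message in this journal carries a klog header before the message body: a severity letter (I/W/E/F), the MMDD date, wall-clock time with microseconds, the PID, and the source file:line. A small Go helper that splits journal lines into those fields, which can be handy when triaging a dump like this one (the sample line is taken from above):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogHeader matches the klog prefix used by the kubelet:
    // Lmmdd hh:mm:ss.uuuuuu PID file:line] msg
    var klogHeader = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+):(\d+)\] (.*)$`)

    func main() {
    	line := `I0103 05:50:58.566966 4854 generic.go:334] "Generic (PLEG): container finished" exitCode=0`
    	m := klogHeader.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("not a klog line")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\nmsg=%s\n",
    		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }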
Jan 03 05:50:59 crc kubenswrapper[4854]: I0103 05:50:59.974131 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.034566 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tp82\" (UniqueName: \"kubernetes.io/projected/b6420d73-ab72-4d83-9234-1700c23e6393-kube-api-access-6tp82\") pod \"b6420d73-ab72-4d83-9234-1700c23e6393\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") "
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.034695 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-util\") pod \"b6420d73-ab72-4d83-9234-1700c23e6393\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") "
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.034746 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-bundle\") pod \"b6420d73-ab72-4d83-9234-1700c23e6393\" (UID: \"b6420d73-ab72-4d83-9234-1700c23e6393\") "
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.037528 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-bundle" (OuterVolumeSpecName: "bundle") pod "b6420d73-ab72-4d83-9234-1700c23e6393" (UID: "b6420d73-ab72-4d83-9234-1700c23e6393"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.040333 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6420d73-ab72-4d83-9234-1700c23e6393-kube-api-access-6tp82" (OuterVolumeSpecName: "kube-api-access-6tp82") pod "b6420d73-ab72-4d83-9234-1700c23e6393" (UID: "b6420d73-ab72-4d83-9234-1700c23e6393"). InnerVolumeSpecName "kube-api-access-6tp82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.059351 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-util" (OuterVolumeSpecName: "util") pod "b6420d73-ab72-4d83-9234-1700c23e6393" (UID: "b6420d73-ab72-4d83-9234-1700c23e6393"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.137184 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tp82\" (UniqueName: \"kubernetes.io/projected/b6420d73-ab72-4d83-9234-1700c23e6393-kube-api-access-6tp82\") on node \"crc\" DevicePath \"\""
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.137235 4854 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-util\") on node \"crc\" DevicePath \"\""
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.137254 4854 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6420d73-ab72-4d83-9234-1700c23e6393-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.581383 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp"
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.581348 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086dztp" event={"ID":"b6420d73-ab72-4d83-9234-1700c23e6393","Type":"ContainerDied","Data":"2417cc719c209fdb94b47f92763c645a6550f8827807f1132bac8b0fe9b43b4f"}
Jan 03 05:51:00 crc kubenswrapper[4854]: I0103 05:51:00.581442 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2417cc719c209fdb94b47f92763c645a6550f8827807f1132bac8b0fe9b43b4f"
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.257608 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zffbr"]
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.258651 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovn-controller" containerID="cri-o://1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba" gracePeriod=30
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.258679 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="sbdb" containerID="cri-o://6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" gracePeriod=30
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.258705 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" gracePeriod=30
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.258792 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kube-rbac-proxy-node" containerID="cri-o://fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" gracePeriod=30
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.258834 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="nbdb" containerID="cri-o://c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" gracePeriod=30
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.258854 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="northd" containerID="cri-o://74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" gracePeriod=30
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.258874 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovn-acl-logging" containerID="cri-o://e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71" gracePeriod=30
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.309208 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovnkube-controller" containerID="cri-o://6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288" gracePeriod=30
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.635507 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zffbr_dea8fd3f-411f-44a8-a1d6-4881f41fc149/ovn-acl-logging/0.log"
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.636307 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71" exitCode=143
Jan 03 05:51:05 crc kubenswrapper[4854]: I0103 05:51:05.636351 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"}
Jan 03 05:51:05 crc kubenswrapper[4854]: E0103 05:51:05.975716 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 is running failed: container process not found" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 03 05:51:05 crc kubenswrapper[4854]: E0103 05:51:05.975722 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d is running failed: container process not found" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 03 05:51:05 crc kubenswrapper[4854]: E0103 05:51:05.976376 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 is running failed: container process not found" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 03 05:51:05 crc kubenswrapper[4854]: E0103 05:51:05.976468 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d is running failed: container process not found" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 03 05:51:05 crc kubenswrapper[4854]: E0103 05:51:05.976606 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 is running failed: container process not found" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 03 05:51:05 crc kubenswrapper[4854]: E0103 05:51:05.976649 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="nbdb"
Jan 03 05:51:05 crc kubenswrapper[4854]: E0103 05:51:05.976693 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d is running failed: container process not found" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 03 05:51:05 crc kubenswrapper[4854]: E0103 05:51:05.976719 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="sbdb"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.586288 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zffbr_dea8fd3f-411f-44a8-a1d6-4881f41fc149/ovn-acl-logging/0.log"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.586931 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zffbr_dea8fd3f-411f-44a8-a1d6-4881f41fc149/ovn-controller/0.log"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.587391 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.644278 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zffbr_dea8fd3f-411f-44a8-a1d6-4881f41fc149/ovn-acl-logging/0.log"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.644759 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zffbr_dea8fd3f-411f-44a8-a1d6-4881f41fc149/ovn-controller/0.log"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645129 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288" exitCode=0
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645166 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" exitCode=0
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645178 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" exitCode=0
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645186 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" exitCode=0
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645197 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" exitCode=0
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645207 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" exitCode=0
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645202 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645279 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645294 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645303 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645312 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645338 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645350 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645361 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645367 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645375 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645383 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645389 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645395 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645415 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645420 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645426 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645431 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645436 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645443 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645219 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645216 4854 generic.go:334] "Generic (PLEG): container finished" podID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerID="1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba" exitCode=143
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.645966 4854 scope.go:117] "RemoveContainer" containerID="6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646020 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zffbr" event={"ID":"dea8fd3f-411f-44a8-a1d6-4881f41fc149","Type":"ContainerDied","Data":"565fc801a2168358e61255fc30de012d150001e705004cf9bbfa025053a1507b"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646058 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646095 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646103 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646108 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646113 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646118 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646123 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646128 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.646134 4854 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413"}
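Note: the PLEG entries above show the usual container exit-code convention: 0 for a clean shutdown, and 128+N when signal N ended the process, so the exitCode=143 reported for ovn-acl-logging and ovn-controller means SIGTERM (15), while 137 would mean SIGKILL (9). A small decoder for that convention (the codes in main are the ones seen in this log):

    package main

    import "fmt"

    // describeExit decodes the 128+N convention used for container exit
    // codes: codes above 128 mean the process died from signal N.
    func describeExit(code int) string {
    	switch {
    	case code == 0:
    		return "exited cleanly"
    	case code > 128:
    		return fmt.Sprintf("killed by signal %d (143 = SIGTERM, 137 = SIGKILL)", code-128)
    	default:
    		return fmt.Sprintf("exited with error code %d", code)
    	}
    }

    func main() {
    	for _, c := range []int{0, 2, 143} {
    		fmt.Printf("%d: %s\n", c, describeExit(c))
    	}
    }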
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.647687 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-spn2r_9bfe5118-0560-4d0c-9f5a-8a77143dd58e/kube-multus/0.log"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.647727 4854 generic.go:334] "Generic (PLEG): container finished" podID="9bfe5118-0560-4d0c-9f5a-8a77143dd58e" containerID="9039309fdb9b29d081ebbe9b1145ccab345e3ac234f4bbc0b9267d69a4ee8f81" exitCode=2
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.647746 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-spn2r" event={"ID":"9bfe5118-0560-4d0c-9f5a-8a77143dd58e","Type":"ContainerDied","Data":"9039309fdb9b29d081ebbe9b1145ccab345e3ac234f4bbc0b9267d69a4ee8f81"}
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.648315 4854 scope.go:117] "RemoveContainer" containerID="9039309fdb9b29d081ebbe9b1145ccab345e3ac234f4bbc0b9267d69a4ee8f81"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692253 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cs97n"]
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692458 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="northd"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692470 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="northd"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692481 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovn-controller"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692486 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovn-controller"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692500 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="nbdb"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692506 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="nbdb"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692515 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6420d73-ab72-4d83-9234-1700c23e6393" containerName="extract"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692521 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6420d73-ab72-4d83-9234-1700c23e6393" containerName="extract"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692529 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovn-acl-logging"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692534 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovn-acl-logging"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692540 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kube-rbac-proxy-ovn-metrics"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692546 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kube-rbac-proxy-ovn-metrics"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692553 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6420d73-ab72-4d83-9234-1700c23e6393" containerName="pull"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692559 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6420d73-ab72-4d83-9234-1700c23e6393" containerName="pull"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692568 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kubecfg-setup"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692573 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kubecfg-setup"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692583 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kube-rbac-proxy-node"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692589 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kube-rbac-proxy-node"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692598 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6420d73-ab72-4d83-9234-1700c23e6393" containerName="util"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692604 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6420d73-ab72-4d83-9234-1700c23e6393" containerName="util"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692611 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovnkube-controller"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692618 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovnkube-controller"
Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.692628 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="sbdb"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692634 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="sbdb"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692727 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="nbdb"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692738 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovn-controller"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692744 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovnkube-controller"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692751 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="ovn-acl-logging"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692760 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="sbdb"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692768 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6420d73-ab72-4d83-9234-1700c23e6393" containerName="extract"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692776 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kube-rbac-proxy-node"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692784 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="northd"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.692791 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" containerName="kube-rbac-proxy-ovn-metrics"
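Note: before admitting ovnkube-node-cs97n, the kubelet's CPU and memory managers drop per-container assignments that belong to pods which no longer exist, the "RemoveStaleState" / "Deleted CPUSet assignment" pairs above. A simplified model of that bookkeeping (the real managers persist checkpoints to disk; the map layout and CPU strings here are invented for illustration):

    package main

    import "fmt"

    // removeStaleState drops resource-manager assignments for containers
    // whose pods are no longer active.
    func removeStaleState(assignments map[string]map[string]string, activePods map[string]bool) {
    	for podUID, containers := range assignments {
    		if activePods[podUID] {
    			continue
    		}
    		for name := range containers {
    			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
    		}
    		delete(assignments, podUID) // deleting during range is safe in Go
    	}
    }

    func main() {
    	assignments := map[string]map[string]string{
    		"dea8fd3f-411f-44a8-a1d6-4881f41fc149": {"northd": "cpus 0-1", "nbdb": "cpus 2-3"},
    	}
    	active := map[string]bool{"c6c4aab5-c8ed-4323-87ef-a932943637e0": true}
    	removeStaleState(assignments, active)
    	fmt.Println("remaining assignments:", assignments)
    }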
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.695532 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.699757 4854 scope.go:117] "RemoveContainer" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.728374 4854 scope.go:117] "RemoveContainer" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.730836 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-netd\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.730858 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-var-lib-cni-networks-ovn-kubernetes\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.730876 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-etc-openvswitch\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.730897 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-systemd\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.730934 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-config\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.730953 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-slash\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.730977 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovn-node-metrics-cert\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.730994 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-ovn\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.731008 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-bin\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.731031 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-netns\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.731054 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8qgd\" (UniqueName: \"kubernetes.io/projected/dea8fd3f-411f-44a8-a1d6-4881f41fc149-kube-api-access-z8qgd\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.731069 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-kubelet\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735142 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-ovn-kubernetes\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.732223 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-slash" (OuterVolumeSpecName: "host-slash") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.732260 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735223 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.732292 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.732316 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.732476 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.733025 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.733166 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.733186 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735300 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735295 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735187 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-openvswitch\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735384 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-var-lib-openvswitch\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735406 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-systemd-units\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735458 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-env-overrides\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735500 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-log-socket\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735523 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-script-lib\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735552 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-node-log\") pod \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\" (UID: \"dea8fd3f-411f-44a8-a1d6-4881f41fc149\") "
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735613 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-log-socket" (OuterVolumeSpecName: "log-socket") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735657 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735679 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735887 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-node-log\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735945 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-cni-bin\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735973 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-slash\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.735999 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-run-netns\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736058 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovn-node-metrics-cert\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736101 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-systemd\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736106 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736161 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-node-log" (OuterVolumeSpecName: "node-log") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736177 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovnkube-config\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736221 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-cni-netd\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n"
Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736250 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "env-overrides".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736277 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736440 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-env-overrides\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736485 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-kubelet\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736655 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-log-socket\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736748 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-systemd-units\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736776 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-openvswitch\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736806 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-run-ovn-kubernetes\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736836 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-var-lib-openvswitch\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736855 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-ovn\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.736979 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wjb\" (UniqueName: \"kubernetes.io/projected/c6c4aab5-c8ed-4323-87ef-a932943637e0-kube-api-access-v6wjb\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.737010 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-etc-openvswitch\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.737114 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovnkube-script-lib\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.737219 4854 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.737232 4854 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.737243 4854 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742336 4854 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742373 4854 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742385 4854 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742397 4854 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742408 4854 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-log-socket\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742417 4854 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-node-log\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742425 4854 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742437 4854 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742446 4854 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742459 4854 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742470 4854 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-slash\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742481 4854 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742489 4854 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.742499 4854 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.770463 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.770653 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea8fd3f-411f-44a8-a1d6-4881f41fc149-kube-api-access-z8qgd" (OuterVolumeSpecName: "kube-api-access-z8qgd") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "kube-api-access-z8qgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.781313 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "dea8fd3f-411f-44a8-a1d6-4881f41fc149" (UID: "dea8fd3f-411f-44a8-a1d6-4881f41fc149"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.788129 4854 scope.go:117] "RemoveContainer" containerID="74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.821969 4854 scope.go:117] "RemoveContainer" containerID="0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.846755 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovnkube-script-lib\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.846835 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-node-log\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.846867 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-cni-bin\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.846889 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-slash\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.846908 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-run-netns\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.846936 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovn-node-metrics-cert\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.846953 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-systemd\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc 
kubenswrapper[4854]: I0103 05:51:06.846973 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovnkube-config\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.846994 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-cni-netd\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847015 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847032 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-kubelet\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847048 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-env-overrides\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847064 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-log-socket\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847111 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-systemd-units\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847141 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-openvswitch\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847190 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-run-ovn-kubernetes\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847221 
4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-var-lib-openvswitch\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847250 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-ovn\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847286 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-etc-openvswitch\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847310 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6wjb\" (UniqueName: \"kubernetes.io/projected/c6c4aab5-c8ed-4323-87ef-a932943637e0-kube-api-access-v6wjb\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847369 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8qgd\" (UniqueName: \"kubernetes.io/projected/dea8fd3f-411f-44a8-a1d6-4881f41fc149-kube-api-access-z8qgd\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847382 4854 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dea8fd3f-411f-44a8-a1d6-4881f41fc149-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.847393 4854 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dea8fd3f-411f-44a8-a1d6-4881f41fc149-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.848379 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovnkube-script-lib\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.848432 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-node-log\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.848457 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-cni-bin\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.848480 4854 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-slash\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.848504 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-run-netns\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.848825 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-log-socket\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.848899 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-systemd\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849170 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849218 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-cni-netd\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849251 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-kubelet\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849275 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-host-run-ovn-kubernetes\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849300 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-systemd-units\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849337 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-openvswitch\") pod 
\"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849359 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-run-ovn\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849380 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-var-lib-openvswitch\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849438 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovnkube-config\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849480 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6c4aab5-c8ed-4323-87ef-a932943637e0-etc-openvswitch\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.849727 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6c4aab5-c8ed-4323-87ef-a932943637e0-env-overrides\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.853610 4854 scope.go:117] "RemoveContainer" containerID="fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.854633 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6c4aab5-c8ed-4323-87ef-a932943637e0-ovn-node-metrics-cert\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.868532 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6wjb\" (UniqueName: \"kubernetes.io/projected/c6c4aab5-c8ed-4323-87ef-a932943637e0-kube-api-access-v6wjb\") pod \"ovnkube-node-cs97n\" (UID: \"c6c4aab5-c8ed-4323-87ef-a932943637e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.886559 4854 scope.go:117] "RemoveContainer" containerID="e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.903349 4854 scope.go:117] "RemoveContainer" containerID="1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.921532 4854 scope.go:117] "RemoveContainer" containerID="3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.952510 
4854 scope.go:117] "RemoveContainer" containerID="6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.955675 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": container with ID starting with 6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288 not found: ID does not exist" containerID="6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.955715 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"} err="failed to get container status \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": rpc error: code = NotFound desc = could not find container \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": container with ID starting with 6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.955742 4854 scope.go:117] "RemoveContainer" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.956218 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": container with ID starting with 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d not found: ID does not exist" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.956239 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"} err="failed to get container status \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": rpc error: code = NotFound desc = could not find container \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": container with ID starting with 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.956253 4854 scope.go:117] "RemoveContainer" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.956467 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": container with ID starting with c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 not found: ID does not exist" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.956488 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"} err="failed to get container status \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": rpc error: code = NotFound desc = could not find container \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": container with ID starting with 
c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.956501 4854 scope.go:117] "RemoveContainer" containerID="74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.956673 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": container with ID starting with 74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2 not found: ID does not exist" containerID="74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.956693 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"} err="failed to get container status \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": rpc error: code = NotFound desc = could not find container \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": container with ID starting with 74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.956706 4854 scope.go:117] "RemoveContainer" containerID="0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.956878 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": container with ID starting with 0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276 not found: ID does not exist" containerID="0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.956904 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"} err="failed to get container status \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": rpc error: code = NotFound desc = could not find container \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": container with ID starting with 0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.956920 4854 scope.go:117] "RemoveContainer" containerID="fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.957292 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": container with ID starting with fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160 not found: ID does not exist" containerID="fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.957320 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"} err="failed to get container status \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": rpc 
error: code = NotFound desc = could not find container \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": container with ID starting with fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.957334 4854 scope.go:117] "RemoveContainer" containerID="e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.957527 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": container with ID starting with e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71 not found: ID does not exist" containerID="e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.957543 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"} err="failed to get container status \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": rpc error: code = NotFound desc = could not find container \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": container with ID starting with e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.957555 4854 scope.go:117] "RemoveContainer" containerID="1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.957920 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": container with ID starting with 1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba not found: ID does not exist" containerID="1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.957939 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"} err="failed to get container status \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": rpc error: code = NotFound desc = could not find container \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": container with ID starting with 1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.957958 4854 scope.go:117] "RemoveContainer" containerID="3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413" Jan 03 05:51:06 crc kubenswrapper[4854]: E0103 05:51:06.958142 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": container with ID starting with 3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413 not found: ID does not exist" containerID="3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.958164 4854 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413"} err="failed to get container status \"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": rpc error: code = NotFound desc = could not find container \"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": container with ID starting with 3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.958177 4854 scope.go:117] "RemoveContainer" containerID="6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.958365 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"} err="failed to get container status \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": rpc error: code = NotFound desc = could not find container \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": container with ID starting with 6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.958383 4854 scope.go:117] "RemoveContainer" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.958610 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"} err="failed to get container status \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": rpc error: code = NotFound desc = could not find container \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": container with ID starting with 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.958647 4854 scope.go:117] "RemoveContainer" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.958885 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"} err="failed to get container status \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": rpc error: code = NotFound desc = could not find container \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": container with ID starting with c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.958904 4854 scope.go:117] "RemoveContainer" containerID="74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961161 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"} err="failed to get container status \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": rpc error: code = NotFound desc = could not find container \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": container with ID starting with 74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2 not found: ID does not exist" Jan 
03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961182 4854 scope.go:117] "RemoveContainer" containerID="0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961367 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"} err="failed to get container status \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": rpc error: code = NotFound desc = could not find container \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": container with ID starting with 0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961385 4854 scope.go:117] "RemoveContainer" containerID="fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961570 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"} err="failed to get container status \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": rpc error: code = NotFound desc = could not find container \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": container with ID starting with fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961589 4854 scope.go:117] "RemoveContainer" containerID="e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961756 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"} err="failed to get container status \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": rpc error: code = NotFound desc = could not find container \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": container with ID starting with e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961772 4854 scope.go:117] "RemoveContainer" containerID="1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961934 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"} err="failed to get container status \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": rpc error: code = NotFound desc = could not find container \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": container with ID starting with 1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.961955 4854 scope.go:117] "RemoveContainer" containerID="3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.962229 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413"} err="failed to get container status 
\"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": rpc error: code = NotFound desc = could not find container \"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": container with ID starting with 3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.962247 4854 scope.go:117] "RemoveContainer" containerID="6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.962501 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"} err="failed to get container status \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": rpc error: code = NotFound desc = could not find container \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": container with ID starting with 6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.962541 4854 scope.go:117] "RemoveContainer" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.962779 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"} err="failed to get container status \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": rpc error: code = NotFound desc = could not find container \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": container with ID starting with 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.962799 4854 scope.go:117] "RemoveContainer" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.962952 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"} err="failed to get container status \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": rpc error: code = NotFound desc = could not find container \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": container with ID starting with c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.962978 4854 scope.go:117] "RemoveContainer" containerID="74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963139 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"} err="failed to get container status \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": rpc error: code = NotFound desc = could not find container \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": container with ID starting with 74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963156 4854 scope.go:117] "RemoveContainer" 
containerID="0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963311 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"} err="failed to get container status \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": rpc error: code = NotFound desc = could not find container \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": container with ID starting with 0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963327 4854 scope.go:117] "RemoveContainer" containerID="fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963469 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"} err="failed to get container status \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": rpc error: code = NotFound desc = could not find container \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": container with ID starting with fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963486 4854 scope.go:117] "RemoveContainer" containerID="e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963638 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"} err="failed to get container status \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": rpc error: code = NotFound desc = could not find container \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": container with ID starting with e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963655 4854 scope.go:117] "RemoveContainer" containerID="1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963805 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"} err="failed to get container status \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": rpc error: code = NotFound desc = could not find container \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": container with ID starting with 1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963823 4854 scope.go:117] "RemoveContainer" containerID="3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963972 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413"} err="failed to get container status \"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": rpc error: code = NotFound desc = could not find 
container \"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": container with ID starting with 3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.963989 4854 scope.go:117] "RemoveContainer" containerID="6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.964150 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"} err="failed to get container status \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": rpc error: code = NotFound desc = could not find container \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": container with ID starting with 6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.964167 4854 scope.go:117] "RemoveContainer" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.967999 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"} err="failed to get container status \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": rpc error: code = NotFound desc = could not find container \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": container with ID starting with 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.968030 4854 scope.go:117] "RemoveContainer" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.968405 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"} err="failed to get container status \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": rpc error: code = NotFound desc = could not find container \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": container with ID starting with c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.968454 4854 scope.go:117] "RemoveContainer" containerID="74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.968724 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"} err="failed to get container status \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": rpc error: code = NotFound desc = could not find container \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": container with ID starting with 74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.968744 4854 scope.go:117] "RemoveContainer" containerID="0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969024 4854 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"} err="failed to get container status \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": rpc error: code = NotFound desc = could not find container \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": container with ID starting with 0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969054 4854 scope.go:117] "RemoveContainer" containerID="fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969251 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"} err="failed to get container status \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": rpc error: code = NotFound desc = could not find container \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": container with ID starting with fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969271 4854 scope.go:117] "RemoveContainer" containerID="e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969447 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71"} err="failed to get container status \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": rpc error: code = NotFound desc = could not find container \"e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71\": container with ID starting with e4d2128cfbfad21895f6b5c0b98cdb06df7b48aa178c15b4162635e92afcfb71 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969464 4854 scope.go:117] "RemoveContainer" containerID="1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969656 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba"} err="failed to get container status \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": rpc error: code = NotFound desc = could not find container \"1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba\": container with ID starting with 1ed03d4f5e2709066a245014f5a4caef63f0b02ecf2fce8e89c6b9f43fc58aba not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969682 4854 scope.go:117] "RemoveContainer" containerID="3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969852 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413"} err="failed to get container status \"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": rpc error: code = NotFound desc = could not find container \"3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413\": container with ID starting with 
3faa8a1d01333196ebebebb77705daea5692b77e3602574b64b3944798dd5413 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.969869 4854 scope.go:117] "RemoveContainer" containerID="6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.970070 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288"} err="failed to get container status \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": rpc error: code = NotFound desc = could not find container \"6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288\": container with ID starting with 6044841ea44f74d5a3fe2aa4bd6f8c818aaf51bd4e502d7c505b64cd25518288 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.970099 4854 scope.go:117] "RemoveContainer" containerID="6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.970545 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d"} err="failed to get container status \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": rpc error: code = NotFound desc = could not find container \"6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d\": container with ID starting with 6c0bb025e7decbbbaa763a759e9e0c7557db17fcc68ec3f6cb51787d923a0e8d not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.970593 4854 scope.go:117] "RemoveContainer" containerID="c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.970913 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31"} err="failed to get container status \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": rpc error: code = NotFound desc = could not find container \"c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31\": container with ID starting with c1a422e6522d553b1ce279a23816bf94865070feb187372d04cc50e8bb733d31 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.970940 4854 scope.go:117] "RemoveContainer" containerID="74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.971301 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2"} err="failed to get container status \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": rpc error: code = NotFound desc = could not find container \"74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2\": container with ID starting with 74d71e736e09b8dab45141b304bca1c5ce323b3a9cf06b5a917072a38be385d2 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.971332 4854 scope.go:117] "RemoveContainer" containerID="0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.971676 4854 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276"} err="failed to get container status \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": rpc error: code = NotFound desc = could not find container \"0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276\": container with ID starting with 0ff12fdc87727ca31d1c393d3ba08b1b3c814b82d28164b99e396b2ab8537276 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.971694 4854 scope.go:117] "RemoveContainer" containerID="fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.971933 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160"} err="failed to get container status \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": rpc error: code = NotFound desc = could not find container \"fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160\": container with ID starting with fa2191fe20b6f81927579ecb9da5b3eb8b9ef9cabfa0dc608b92a1565bcc1160 not found: ID does not exist" Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.986877 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zffbr"] Jan 03 05:51:06 crc kubenswrapper[4854]: I0103 05:51:06.993134 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zffbr"] Jan 03 05:51:07 crc kubenswrapper[4854]: I0103 05:51:07.021903 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:07 crc kubenswrapper[4854]: I0103 05:51:07.660330 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-spn2r_9bfe5118-0560-4d0c-9f5a-8a77143dd58e/kube-multus/0.log" Jan 03 05:51:07 crc kubenswrapper[4854]: I0103 05:51:07.660659 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-spn2r" event={"ID":"9bfe5118-0560-4d0c-9f5a-8a77143dd58e","Type":"ContainerStarted","Data":"28116aa28489e8e50f9aa68b6373db44bb95ff4b7c1c2ed647fa2a2b562aa427"} Jan 03 05:51:07 crc kubenswrapper[4854]: I0103 05:51:07.663861 4854 generic.go:334] "Generic (PLEG): container finished" podID="c6c4aab5-c8ed-4323-87ef-a932943637e0" containerID="cd96ae42996aa5a291f4b0624456c22d21831dd051220b554bd5deb469270b96" exitCode=0 Jan 03 05:51:07 crc kubenswrapper[4854]: I0103 05:51:07.663909 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerDied","Data":"cd96ae42996aa5a291f4b0624456c22d21831dd051220b554bd5deb469270b96"} Jan 03 05:51:07 crc kubenswrapper[4854]: I0103 05:51:07.663938 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"e2d011e47591f9ed0a0cc72ea5c911136ccec22981288f007d9685e6f40b968c"} Jan 03 05:51:08 crc kubenswrapper[4854]: I0103 05:51:08.127825 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea8fd3f-411f-44a8-a1d6-4881f41fc149" path="/var/lib/kubelet/pods/dea8fd3f-411f-44a8-a1d6-4881f41fc149/volumes" Jan 03 05:51:08 crc kubenswrapper[4854]: I0103 05:51:08.676287 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"39be61b18cad54aa3b5bde8e7260b5127da325bd7fe91392e272f324097e417b"} Jan 03 05:51:08 crc kubenswrapper[4854]: I0103 05:51:08.676873 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"d01409d11e0df9407f878a07e612a3e5cfa02c7f5cf88811627e68ccedf37473"} Jan 03 05:51:09 crc kubenswrapper[4854]: I0103 05:51:09.687133 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"390728673f6713c8773e64d6153ffdda0e10db16e86379a55a409753c509bded"} Jan 03 05:51:09 crc kubenswrapper[4854]: I0103 05:51:09.687679 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"9f1bf40ad57a646d3480b88f55430ca2552926bb0570de1b2a83103ddb63961e"} Jan 03 05:51:09 crc kubenswrapper[4854]: I0103 05:51:09.687781 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"94b62d674368e0c1df366f908c412a04dcfa7c9ed644ce0216ab92ae2cf31a23"} Jan 03 05:51:10 crc kubenswrapper[4854]: I0103 05:51:10.696496 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"aa2b94c3fe2547fc7e595fd456b08646939e5c7905b277b7e69b0fa39279c24a"} Jan 03 05:51:11 crc kubenswrapper[4854]: I0103 05:51:11.755604 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:51:11 crc kubenswrapper[4854]: I0103 05:51:11.755875 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:51:11 crc kubenswrapper[4854]: I0103 05:51:11.755910 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:51:11 crc kubenswrapper[4854]: I0103 05:51:11.756338 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75bb4ac621ba37ad54638b77615a99cc4b805eef98715dac578d470754e1c858"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:51:11 crc kubenswrapper[4854]: I0103 05:51:11.756387 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" 
containerID="cri-o://75bb4ac621ba37ad54638b77615a99cc4b805eef98715dac578d470754e1c858" gracePeriod=600 Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.374927 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb"] Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.375968 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.379857 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.379947 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.380136 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-xfdbd" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.424555 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr"] Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.425399 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.427366 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-p7lr6" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.427569 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.434600 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l"] Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.435630 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.563578 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e06ca97c-67a8-445d-8757-447a32701d72-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr\" (UID: \"e06ca97c-67a8-445d-8757-447a32701d72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.563623 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f976e43-5c43-4078-bef3-eb53dc0e4f18-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l\" (UID: \"5f976e43-5c43-4078-bef3-eb53dc0e4f18\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.563650 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f976e43-5c43-4078-bef3-eb53dc0e4f18-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l\" (UID: \"5f976e43-5c43-4078-bef3-eb53dc0e4f18\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.563855 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e06ca97c-67a8-445d-8757-447a32701d72-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr\" (UID: \"e06ca97c-67a8-445d-8757-447a32701d72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.563948 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqx5\" (UniqueName: \"kubernetes.io/projected/fbb08b05-633e-45e0-b237-34f8100ab3c9-kube-api-access-rqqx5\") pod \"obo-prometheus-operator-68bc856cb9-zd6cb\" (UID: \"fbb08b05-633e-45e0-b237-34f8100ab3c9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.600781 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9trnq"] Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.601510 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.603738 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.603911 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ktwjf" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.665088 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f976e43-5c43-4078-bef3-eb53dc0e4f18-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l\" (UID: \"5f976e43-5c43-4078-bef3-eb53dc0e4f18\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.665160 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e06ca97c-67a8-445d-8757-447a32701d72-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr\" (UID: \"e06ca97c-67a8-445d-8757-447a32701d72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.665190 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqx5\" (UniqueName: \"kubernetes.io/projected/fbb08b05-633e-45e0-b237-34f8100ab3c9-kube-api-access-rqqx5\") pod \"obo-prometheus-operator-68bc856cb9-zd6cb\" (UID: \"fbb08b05-633e-45e0-b237-34f8100ab3c9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.665248 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e06ca97c-67a8-445d-8757-447a32701d72-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr\" (UID: \"e06ca97c-67a8-445d-8757-447a32701d72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.665266 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f976e43-5c43-4078-bef3-eb53dc0e4f18-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l\" (UID: \"5f976e43-5c43-4078-bef3-eb53dc0e4f18\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.671652 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f976e43-5c43-4078-bef3-eb53dc0e4f18-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l\" (UID: \"5f976e43-5c43-4078-bef3-eb53dc0e4f18\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.673606 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f976e43-5c43-4078-bef3-eb53dc0e4f18-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l\" (UID: \"5f976e43-5c43-4078-bef3-eb53dc0e4f18\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.678680 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e06ca97c-67a8-445d-8757-447a32701d72-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr\" (UID: \"e06ca97c-67a8-445d-8757-447a32701d72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.678835 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e06ca97c-67a8-445d-8757-447a32701d72-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr\" (UID: \"e06ca97c-67a8-445d-8757-447a32701d72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.688261 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqx5\" (UniqueName: \"kubernetes.io/projected/fbb08b05-633e-45e0-b237-34f8100ab3c9-kube-api-access-rqqx5\") pod \"obo-prometheus-operator-68bc856cb9-zd6cb\" (UID: \"fbb08b05-633e-45e0-b237-34f8100ab3c9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.695280 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.720332 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="75bb4ac621ba37ad54638b77615a99cc4b805eef98715dac578d470754e1c858" exitCode=0 Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.720393 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"75bb4ac621ba37ad54638b77615a99cc4b805eef98715dac578d470754e1c858"} Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.720425 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"e01a9e027959d17e8604f32720a945b283546578a7ff1bc2cd05356d9cba66ad"} Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.720442 4854 scope.go:117] "RemoveContainer" containerID="d8ba999ad3c3dcd9750b64af99186e9f84152e1189793a50472b4e974fec8292" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.725107 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators_fbb08b05-633e-45e0-b237-34f8100ab3c9_0(e93bbcb8d90d2ae1c17f71d40058fc3eda99a06da393646ee27f4e9cd1ab4d17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.725154 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators_fbb08b05-633e-45e0-b237-34f8100ab3c9_0(e93bbcb8d90d2ae1c17f71d40058fc3eda99a06da393646ee27f4e9cd1ab4d17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.725178 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators_fbb08b05-633e-45e0-b237-34f8100ab3c9_0(e93bbcb8d90d2ae1c17f71d40058fc3eda99a06da393646ee27f4e9cd1ab4d17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.725213 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators(fbb08b05-633e-45e0-b237-34f8100ab3c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators(fbb08b05-633e-45e0-b237-34f8100ab3c9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators_fbb08b05-633e-45e0-b237-34f8100ab3c9_0(e93bbcb8d90d2ae1c17f71d40058fc3eda99a06da393646ee27f4e9cd1ab4d17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" podUID="fbb08b05-633e-45e0-b237-34f8100ab3c9" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.725703 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"02a78ab772751b0fcd5d340ace0e8f0fc956ce6fa0fc12a830ce5c2ce715c45f"} Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.727385 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tgcxk"] Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.728498 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.730553 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-rt98t" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.740170 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.751744 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.776821 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqrr\" (UniqueName: \"kubernetes.io/projected/5c8ccde8-0051-491f-b5d6-a2930440c138-kube-api-access-cmqrr\") pod \"observability-operator-59bdc8b94-9trnq\" (UID: \"5c8ccde8-0051-491f-b5d6-a2930440c138\") " pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.776900 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c8ccde8-0051-491f-b5d6-a2930440c138-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9trnq\" (UID: \"5c8ccde8-0051-491f-b5d6-a2930440c138\") " pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.783635 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators_e06ca97c-67a8-445d-8757-447a32701d72_0(fd68aabd4754020e29c2219ed2f9f3e6a952f2be68f0d642be2c7c867b127ad5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.783701 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators_e06ca97c-67a8-445d-8757-447a32701d72_0(fd68aabd4754020e29c2219ed2f9f3e6a952f2be68f0d642be2c7c867b127ad5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.783726 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators_e06ca97c-67a8-445d-8757-447a32701d72_0(fd68aabd4754020e29c2219ed2f9f3e6a952f2be68f0d642be2c7c867b127ad5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.783768 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators(e06ca97c-67a8-445d-8757-447a32701d72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators(e06ca97c-67a8-445d-8757-447a32701d72)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators_e06ca97c-67a8-445d-8757-447a32701d72_0(fd68aabd4754020e29c2219ed2f9f3e6a952f2be68f0d642be2c7c867b127ad5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" podUID="e06ca97c-67a8-445d-8757-447a32701d72" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.798344 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators_5f976e43-5c43-4078-bef3-eb53dc0e4f18_0(cc8a20a6db6a651b34c7bf69caf8171e797697295031329c6690bdfd8263174c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.798412 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators_5f976e43-5c43-4078-bef3-eb53dc0e4f18_0(cc8a20a6db6a651b34c7bf69caf8171e797697295031329c6690bdfd8263174c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.798434 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators_5f976e43-5c43-4078-bef3-eb53dc0e4f18_0(cc8a20a6db6a651b34c7bf69caf8171e797697295031329c6690bdfd8263174c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.798489 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators(5f976e43-5c43-4078-bef3-eb53dc0e4f18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators(5f976e43-5c43-4078-bef3-eb53dc0e4f18)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators_5f976e43-5c43-4078-bef3-eb53dc0e4f18_0(cc8a20a6db6a651b34c7bf69caf8171e797697295031329c6690bdfd8263174c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" podUID="5f976e43-5c43-4078-bef3-eb53dc0e4f18" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.878532 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nz6x\" (UniqueName: \"kubernetes.io/projected/ff43f741-1a42-4dfa-bfea-11b28b56487c-kube-api-access-2nz6x\") pod \"perses-operator-5bf474d74f-tgcxk\" (UID: \"ff43f741-1a42-4dfa-bfea-11b28b56487c\") " pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.878677 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c8ccde8-0051-491f-b5d6-a2930440c138-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9trnq\" (UID: \"5c8ccde8-0051-491f-b5d6-a2930440c138\") " pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.878797 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff43f741-1a42-4dfa-bfea-11b28b56487c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tgcxk\" (UID: \"ff43f741-1a42-4dfa-bfea-11b28b56487c\") " pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.878869 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmqrr\" (UniqueName: \"kubernetes.io/projected/5c8ccde8-0051-491f-b5d6-a2930440c138-kube-api-access-cmqrr\") pod \"observability-operator-59bdc8b94-9trnq\" (UID: \"5c8ccde8-0051-491f-b5d6-a2930440c138\") " pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.886648 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c8ccde8-0051-491f-b5d6-a2930440c138-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9trnq\" (UID: \"5c8ccde8-0051-491f-b5d6-a2930440c138\") " pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.894600 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmqrr\" (UniqueName: \"kubernetes.io/projected/5c8ccde8-0051-491f-b5d6-a2930440c138-kube-api-access-cmqrr\") pod \"observability-operator-59bdc8b94-9trnq\" (UID: \"5c8ccde8-0051-491f-b5d6-a2930440c138\") " pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.914143 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.934850 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9trnq_openshift-operators_5c8ccde8-0051-491f-b5d6-a2930440c138_0(20679036d6f0df26a4fbc33a09a5d350202cdfb279f97af69c131e271db262f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.934925 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9trnq_openshift-operators_5c8ccde8-0051-491f-b5d6-a2930440c138_0(20679036d6f0df26a4fbc33a09a5d350202cdfb279f97af69c131e271db262f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.934945 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9trnq_openshift-operators_5c8ccde8-0051-491f-b5d6-a2930440c138_0(20679036d6f0df26a4fbc33a09a5d350202cdfb279f97af69c131e271db262f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:12 crc kubenswrapper[4854]: E0103 05:51:12.934989 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9trnq_openshift-operators(5c8ccde8-0051-491f-b5d6-a2930440c138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9trnq_openshift-operators(5c8ccde8-0051-491f-b5d6-a2930440c138)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9trnq_openshift-operators_5c8ccde8-0051-491f-b5d6-a2930440c138_0(20679036d6f0df26a4fbc33a09a5d350202cdfb279f97af69c131e271db262f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.980097 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz6x\" (UniqueName: \"kubernetes.io/projected/ff43f741-1a42-4dfa-bfea-11b28b56487c-kube-api-access-2nz6x\") pod \"perses-operator-5bf474d74f-tgcxk\" (UID: \"ff43f741-1a42-4dfa-bfea-11b28b56487c\") " pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.980212 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff43f741-1a42-4dfa-bfea-11b28b56487c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tgcxk\" (UID: \"ff43f741-1a42-4dfa-bfea-11b28b56487c\") " pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:12 crc kubenswrapper[4854]: I0103 05:51:12.981270 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff43f741-1a42-4dfa-bfea-11b28b56487c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tgcxk\" (UID: \"ff43f741-1a42-4dfa-bfea-11b28b56487c\") " pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:13 crc kubenswrapper[4854]: I0103 05:51:13.017206 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz6x\" (UniqueName: \"kubernetes.io/projected/ff43f741-1a42-4dfa-bfea-11b28b56487c-kube-api-access-2nz6x\") pod \"perses-operator-5bf474d74f-tgcxk\" (UID: \"ff43f741-1a42-4dfa-bfea-11b28b56487c\") " pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:13 crc kubenswrapper[4854]: I0103 05:51:13.042448 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:13 crc kubenswrapper[4854]: E0103 05:51:13.061703 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tgcxk_openshift-operators_ff43f741-1a42-4dfa-bfea-11b28b56487c_0(29365f89066a60228eaa7e848013c8a49c39b86b4b63047266ba6f4a02d1a2e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 05:51:13 crc kubenswrapper[4854]: E0103 05:51:13.061772 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tgcxk_openshift-operators_ff43f741-1a42-4dfa-bfea-11b28b56487c_0(29365f89066a60228eaa7e848013c8a49c39b86b4b63047266ba6f4a02d1a2e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:13 crc kubenswrapper[4854]: E0103 05:51:13.061809 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tgcxk_openshift-operators_ff43f741-1a42-4dfa-bfea-11b28b56487c_0(29365f89066a60228eaa7e848013c8a49c39b86b4b63047266ba6f4a02d1a2e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:13 crc kubenswrapper[4854]: E0103 05:51:13.061887 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tgcxk_openshift-operators(ff43f741-1a42-4dfa-bfea-11b28b56487c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tgcxk_openshift-operators(ff43f741-1a42-4dfa-bfea-11b28b56487c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tgcxk_openshift-operators_ff43f741-1a42-4dfa-bfea-11b28b56487c_0(29365f89066a60228eaa7e848013c8a49c39b86b4b63047266ba6f4a02d1a2e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podUID="ff43f741-1a42-4dfa-bfea-11b28b56487c" Jan 03 05:51:15 crc kubenswrapper[4854]: I0103 05:51:15.168750 4854 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 03 05:51:15 crc kubenswrapper[4854]: I0103 05:51:15.754399 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" event={"ID":"c6c4aab5-c8ed-4323-87ef-a932943637e0","Type":"ContainerStarted","Data":"139f6748fdfbc1455c045afb871561752e4b5cc05bc8e11152857192bd50480a"} Jan 03 05:51:15 crc kubenswrapper[4854]: I0103 05:51:15.754787 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:15 crc kubenswrapper[4854]: I0103 05:51:15.754803 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:15 crc kubenswrapper[4854]: I0103 05:51:15.754815 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:15 crc kubenswrapper[4854]: I0103 05:51:15.783155 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" podStartSLOduration=9.783133549 podStartE2EDuration="9.783133549s" podCreationTimestamp="2026-01-03 05:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:51:15.78039813 +0000 UTC m=+654.106974712" watchObservedRunningTime="2026-01-03 05:51:15.783133549 +0000 UTC m=+654.109710131" Jan 03 05:51:15 crc kubenswrapper[4854]: I0103 05:51:15.806029 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:15 crc kubenswrapper[4854]: I0103 05:51:15.812844 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.269596 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr"] Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.269709 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.270177 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.273694 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb"] Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.273803 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.274318 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.286058 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tgcxk"] Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.286191 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.286600 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.291145 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9trnq"] Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.291534 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.292040 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.323378 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators_e06ca97c-67a8-445d-8757-447a32701d72_0(246676b55878303a909c1cf4316a93778869e1fa695b69a694226046b3f7de7a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.323440 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators_e06ca97c-67a8-445d-8757-447a32701d72_0(246676b55878303a909c1cf4316a93778869e1fa695b69a694226046b3f7de7a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.323476 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators_e06ca97c-67a8-445d-8757-447a32701d72_0(246676b55878303a909c1cf4316a93778869e1fa695b69a694226046b3f7de7a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.323520 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators(e06ca97c-67a8-445d-8757-447a32701d72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators(e06ca97c-67a8-445d-8757-447a32701d72)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr_openshift-operators_e06ca97c-67a8-445d-8757-447a32701d72_0(246676b55878303a909c1cf4316a93778869e1fa695b69a694226046b3f7de7a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" podUID="e06ca97c-67a8-445d-8757-447a32701d72" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.364645 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tgcxk_openshift-operators_ff43f741-1a42-4dfa-bfea-11b28b56487c_0(dd583343e7cb4517e3ae75e02fd2cc84074b8b8bf45cf1d4748935aa00f8cad9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.364702 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tgcxk_openshift-operators_ff43f741-1a42-4dfa-bfea-11b28b56487c_0(dd583343e7cb4517e3ae75e02fd2cc84074b8b8bf45cf1d4748935aa00f8cad9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.364723 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tgcxk_openshift-operators_ff43f741-1a42-4dfa-bfea-11b28b56487c_0(dd583343e7cb4517e3ae75e02fd2cc84074b8b8bf45cf1d4748935aa00f8cad9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.364762 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tgcxk_openshift-operators(ff43f741-1a42-4dfa-bfea-11b28b56487c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tgcxk_openshift-operators(ff43f741-1a42-4dfa-bfea-11b28b56487c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tgcxk_openshift-operators_ff43f741-1a42-4dfa-bfea-11b28b56487c_0(dd583343e7cb4517e3ae75e02fd2cc84074b8b8bf45cf1d4748935aa00f8cad9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podUID="ff43f741-1a42-4dfa-bfea-11b28b56487c" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.377194 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators_fbb08b05-633e-45e0-b237-34f8100ab3c9_0(bb49fa1129767dccb5503e99677413a44c1e163df0018ce45c6c09114f935331): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.377248 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators_fbb08b05-633e-45e0-b237-34f8100ab3c9_0(bb49fa1129767dccb5503e99677413a44c1e163df0018ce45c6c09114f935331): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.377268 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators_fbb08b05-633e-45e0-b237-34f8100ab3c9_0(bb49fa1129767dccb5503e99677413a44c1e163df0018ce45c6c09114f935331): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.377305 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators(fbb08b05-633e-45e0-b237-34f8100ab3c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators(fbb08b05-633e-45e0-b237-34f8100ab3c9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zd6cb_openshift-operators_fbb08b05-633e-45e0-b237-34f8100ab3c9_0(bb49fa1129767dccb5503e99677413a44c1e163df0018ce45c6c09114f935331): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" podUID="fbb08b05-633e-45e0-b237-34f8100ab3c9" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.447255 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9trnq_openshift-operators_5c8ccde8-0051-491f-b5d6-a2930440c138_0(c882880e0384c25d982fdb72de9ee5698eeb2d2508e2b278d0534e140c10719b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.447336 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9trnq_openshift-operators_5c8ccde8-0051-491f-b5d6-a2930440c138_0(c882880e0384c25d982fdb72de9ee5698eeb2d2508e2b278d0534e140c10719b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.447370 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9trnq_openshift-operators_5c8ccde8-0051-491f-b5d6-a2930440c138_0(c882880e0384c25d982fdb72de9ee5698eeb2d2508e2b278d0534e140c10719b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.447408 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9trnq_openshift-operators(5c8ccde8-0051-491f-b5d6-a2930440c138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9trnq_openshift-operators(5c8ccde8-0051-491f-b5d6-a2930440c138)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9trnq_openshift-operators_5c8ccde8-0051-491f-b5d6-a2930440c138_0(c882880e0384c25d982fdb72de9ee5698eeb2d2508e2b278d0534e140c10719b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.554975 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l"] Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.555103 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:17 crc kubenswrapper[4854]: I0103 05:51:17.555544 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.582117 4854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators_5f976e43-5c43-4078-bef3-eb53dc0e4f18_0(f52515c6f7237ed77cb088057a265bc41b0835b72c9dcbc0f064e619d5899181): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.582176 4854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators_5f976e43-5c43-4078-bef3-eb53dc0e4f18_0(f52515c6f7237ed77cb088057a265bc41b0835b72c9dcbc0f064e619d5899181): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.582197 4854 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators_5f976e43-5c43-4078-bef3-eb53dc0e4f18_0(f52515c6f7237ed77cb088057a265bc41b0835b72c9dcbc0f064e619d5899181): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:17 crc kubenswrapper[4854]: E0103 05:51:17.582245 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators(5f976e43-5c43-4078-bef3-eb53dc0e4f18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators(5f976e43-5c43-4078-bef3-eb53dc0e4f18)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l_openshift-operators_5f976e43-5c43-4078-bef3-eb53dc0e4f18_0(f52515c6f7237ed77cb088057a265bc41b0835b72c9dcbc0f064e619d5899181): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" podUID="5f976e43-5c43-4078-bef3-eb53dc0e4f18" Jan 03 05:51:22 crc kubenswrapper[4854]: I0103 05:51:22.355283 4854 scope.go:117] "RemoveContainer" containerID="0578ac453395e979a1316b50402ba660b646103a4ccce294c2f4164820cea48e" Jan 03 05:51:28 crc kubenswrapper[4854]: I0103 05:51:28.117519 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:28 crc kubenswrapper[4854]: I0103 05:51:28.118188 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" Jan 03 05:51:28 crc kubenswrapper[4854]: I0103 05:51:28.603024 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb"] Jan 03 05:51:28 crc kubenswrapper[4854]: I0103 05:51:28.835422 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" event={"ID":"fbb08b05-633e-45e0-b237-34f8100ab3c9","Type":"ContainerStarted","Data":"37d4db7a68889c95746fdfa278babb80dc2f5175e2b4eb97f043d5c9e29fbd3b"} Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.117420 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.117483 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.117857 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.118143 4854 util.go:30] "No sandbox for pod can be found. 
Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.118143 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.477762 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr"] Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.784505 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9trnq"] Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.842171 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" event={"ID":"e06ca97c-67a8-445d-8757-447a32701d72","Type":"ContainerStarted","Data":"60bb0b8146411638c554b3d8902c414a6ec345bdd262d66771dc7c51f1b3e013"} Jan 03 05:51:29 crc kubenswrapper[4854]: I0103 05:51:29.843464 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" event={"ID":"5c8ccde8-0051-491f-b5d6-a2930440c138","Type":"ContainerStarted","Data":"6bbfbeb5497353e34c9e857767ce9f0da2f83541d209e729f0e53f1940fc76f9"} Jan 03 05:51:30 crc kubenswrapper[4854]: I0103 05:51:30.117317 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:30 crc kubenswrapper[4854]: I0103 05:51:30.117827 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" Jan 03 05:51:30 crc kubenswrapper[4854]: I0103 05:51:30.678597 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l"] Jan 03 05:51:30 crc kubenswrapper[4854]: W0103 05:51:30.691593 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f976e43_5c43_4078_bef3_eb53dc0e4f18.slice/crio-bc58391360149524700ad705881276303af2a61fc483244d4e17e3af568f37dc WatchSource:0}: Error finding container bc58391360149524700ad705881276303af2a61fc483244d4e17e3af568f37dc: Status 404 returned error can't find the container with id bc58391360149524700ad705881276303af2a61fc483244d4e17e3af568f37dc Jan 03 05:51:30 crc kubenswrapper[4854]: I0103 05:51:30.852569 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" event={"ID":"5f976e43-5c43-4078-bef3-eb53dc0e4f18","Type":"ContainerStarted","Data":"bc58391360149524700ad705881276303af2a61fc483244d4e17e3af568f37dc"} Jan 03 05:51:31 crc kubenswrapper[4854]: I0103 05:51:31.117089 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:31 crc kubenswrapper[4854]: I0103 05:51:31.117630 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:31 crc kubenswrapper[4854]: I0103 05:51:31.782634 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tgcxk"] Jan 03 05:51:31 crc kubenswrapper[4854]: I0103 05:51:31.860672 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" event={"ID":"ff43f741-1a42-4dfa-bfea-11b28b56487c","Type":"ContainerStarted","Data":"97f3bdb7299406f633b0392043d068b786fccf30735901032b2ffc461e6afaa3"} Jan 03 05:51:37 crc kubenswrapper[4854]: I0103 05:51:37.100375 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.952999 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" event={"ID":"fbb08b05-633e-45e0-b237-34f8100ab3c9","Type":"ContainerStarted","Data":"74033d386a52e8ce39048d0ea1c1b0992c91b26cff38f0fd6ea8c68d7f15f817"} Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.957296 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" event={"ID":"5f976e43-5c43-4078-bef3-eb53dc0e4f18","Type":"ContainerStarted","Data":"bcb008291d738a947517a20795f7b8522f93de9353f922ec6d5f4841afb20346"} Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.962919 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" event={"ID":"e06ca97c-67a8-445d-8757-447a32701d72","Type":"ContainerStarted","Data":"67f3ecd004d8ca663dbb65e2217d022a79a44e00d11256390379ba2737839925"} Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.965417 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" event={"ID":"ff43f741-1a42-4dfa-bfea-11b28b56487c","Type":"ContainerStarted","Data":"f405bd45a3ece408bad970f00fb93b5c6601f3fcffd8d97ae7f1fa72ba2b990a"} Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.965601 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.970666 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" event={"ID":"5c8ccde8-0051-491f-b5d6-a2930440c138","Type":"ContainerStarted","Data":"1fef4fe0b5cd3735e92b2987769721a91e3baf81c3b158e62352607a1dd17e36"} Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.970964 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.976055 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 05:51:43 crc kubenswrapper[4854]: I0103 05:51:43.995210 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zd6cb" podStartSLOduration=17.758422726 podStartE2EDuration="31.995179536s" podCreationTimestamp="2026-01-03 05:51:12 +0000 UTC" firstStartedPulling="2026-01-03 05:51:28.608782628 +0000 UTC m=+666.935359200" lastFinishedPulling="2026-01-03 05:51:42.845539378 +0000 UTC 
m=+681.172116010" observedRunningTime="2026-01-03 05:51:43.982134555 +0000 UTC m=+682.308711227" watchObservedRunningTime="2026-01-03 05:51:43.995179536 +0000 UTC m=+682.321756188" Jan 03 05:51:44 crc kubenswrapper[4854]: I0103 05:51:44.019830 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-ht6nr" podStartSLOduration=18.70336137 podStartE2EDuration="32.01977573s" podCreationTimestamp="2026-01-03 05:51:12 +0000 UTC" firstStartedPulling="2026-01-03 05:51:29.496785898 +0000 UTC m=+667.823362470" lastFinishedPulling="2026-01-03 05:51:42.813200258 +0000 UTC m=+681.139776830" observedRunningTime="2026-01-03 05:51:44.016584679 +0000 UTC m=+682.343161301" watchObservedRunningTime="2026-01-03 05:51:44.01977573 +0000 UTC m=+682.346352342" Jan 03 05:51:44 crc kubenswrapper[4854]: I0103 05:51:44.096731 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podStartSLOduration=19.03601818 podStartE2EDuration="32.096709832s" podCreationTimestamp="2026-01-03 05:51:12 +0000 UTC" firstStartedPulling="2026-01-03 05:51:29.800335679 +0000 UTC m=+668.126912261" lastFinishedPulling="2026-01-03 05:51:42.861027331 +0000 UTC m=+681.187603913" observedRunningTime="2026-01-03 05:51:44.086223576 +0000 UTC m=+682.412800238" watchObservedRunningTime="2026-01-03 05:51:44.096709832 +0000 UTC m=+682.423286404" Jan 03 05:51:44 crc kubenswrapper[4854]: I0103 05:51:44.122857 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podStartSLOduration=21.087322533 podStartE2EDuration="32.122834635s" podCreationTimestamp="2026-01-03 05:51:12 +0000 UTC" firstStartedPulling="2026-01-03 05:51:31.809325299 +0000 UTC m=+670.135901871" lastFinishedPulling="2026-01-03 05:51:42.844837361 +0000 UTC m=+681.171413973" observedRunningTime="2026-01-03 05:51:44.111187239 +0000 UTC m=+682.437763871" watchObservedRunningTime="2026-01-03 05:51:44.122834635 +0000 UTC m=+682.449411227" Jan 03 05:51:44 crc kubenswrapper[4854]: I0103 05:51:44.143221 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fd7f4dc5-g9m4l" podStartSLOduration=20.021615825 podStartE2EDuration="32.143198152s" podCreationTimestamp="2026-01-03 05:51:12 +0000 UTC" firstStartedPulling="2026-01-03 05:51:30.694776131 +0000 UTC m=+669.021352703" lastFinishedPulling="2026-01-03 05:51:42.816358418 +0000 UTC m=+681.142935030" observedRunningTime="2026-01-03 05:51:44.140052142 +0000 UTC m=+682.466628724" watchObservedRunningTime="2026-01-03 05:51:44.143198152 +0000 UTC m=+682.469774734" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.892141 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5"] Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.893896 4854 util.go:30] "No sandbox for pod can be found. 
Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.892141 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5"] Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.893896 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.898443 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.898647 4854 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qhld5" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.898784 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.900073 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-98h9c"] Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.902337 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-98h9c" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.904966 4854 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n66xt" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.906966 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-98h9c"] Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.912532 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5"] Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.919271 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2kqhz"] Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.920380 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.923848 4854 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-q7qgx" Jan 03 05:51:52 crc kubenswrapper[4854]: I0103 05:51:52.942212 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2kqhz"] Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.017807 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fqb\" (UniqueName: \"kubernetes.io/projected/35ddfe59-bd9d-45a9-aa8d-eceb6d5e3206-kube-api-access-k6fqb\") pod \"cert-manager-cainjector-cf98fcc89-tj7f5\" (UID: \"35ddfe59-bd9d-45a9-aa8d-eceb6d5e3206\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.017871 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89vp\" (UniqueName: \"kubernetes.io/projected/b1c0c51a-7edb-49cb-9b71-f7ce149bde33-kube-api-access-r89vp\") pod \"cert-manager-webhook-687f57d79b-2kqhz\" (UID: \"b1c0c51a-7edb-49cb-9b71-f7ce149bde33\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.017934 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbsl\" (UniqueName: \"kubernetes.io/projected/c0e603b1-39cd-4500-a0d6-190b7a522734-kube-api-access-hvbsl\") pod \"cert-manager-858654f9db-98h9c\" (UID: \"c0e603b1-39cd-4500-a0d6-190b7a522734\") " pod="cert-manager/cert-manager-858654f9db-98h9c" Jan 03 05:51:53 crc 
kubenswrapper[4854]: I0103 05:51:53.047438 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.119221 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r89vp\" (UniqueName: \"kubernetes.io/projected/b1c0c51a-7edb-49cb-9b71-f7ce149bde33-kube-api-access-r89vp\") pod \"cert-manager-webhook-687f57d79b-2kqhz\" (UID: \"b1c0c51a-7edb-49cb-9b71-f7ce149bde33\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.119294 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvbsl\" (UniqueName: \"kubernetes.io/projected/c0e603b1-39cd-4500-a0d6-190b7a522734-kube-api-access-hvbsl\") pod \"cert-manager-858654f9db-98h9c\" (UID: \"c0e603b1-39cd-4500-a0d6-190b7a522734\") " pod="cert-manager/cert-manager-858654f9db-98h9c" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.119393 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fqb\" (UniqueName: \"kubernetes.io/projected/35ddfe59-bd9d-45a9-aa8d-eceb6d5e3206-kube-api-access-k6fqb\") pod \"cert-manager-cainjector-cf98fcc89-tj7f5\" (UID: \"35ddfe59-bd9d-45a9-aa8d-eceb6d5e3206\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.142888 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fqb\" (UniqueName: \"kubernetes.io/projected/35ddfe59-bd9d-45a9-aa8d-eceb6d5e3206-kube-api-access-k6fqb\") pod \"cert-manager-cainjector-cf98fcc89-tj7f5\" (UID: \"35ddfe59-bd9d-45a9-aa8d-eceb6d5e3206\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.143236 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r89vp\" (UniqueName: \"kubernetes.io/projected/b1c0c51a-7edb-49cb-9b71-f7ce149bde33-kube-api-access-r89vp\") pod \"cert-manager-webhook-687f57d79b-2kqhz\" (UID: \"b1c0c51a-7edb-49cb-9b71-f7ce149bde33\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.147263 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvbsl\" (UniqueName: \"kubernetes.io/projected/c0e603b1-39cd-4500-a0d6-190b7a522734-kube-api-access-hvbsl\") pod \"cert-manager-858654f9db-98h9c\" (UID: \"c0e603b1-39cd-4500-a0d6-190b7a522734\") " pod="cert-manager/cert-manager-858654f9db-98h9c" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.219795 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.243539 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-98h9c" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.252932 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.664381 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5"] Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.921163 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2kqhz"] Jan 03 05:51:53 crc kubenswrapper[4854]: W0103 05:51:53.924671 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1c0c51a_7edb_49cb_9b71_f7ce149bde33.slice/crio-2b264a7a57b652b011be533f56c9d2b1a7788dc0742270c67d1299415b2cf2e8 WatchSource:0}: Error finding container 2b264a7a57b652b011be533f56c9d2b1a7788dc0742270c67d1299415b2cf2e8: Status 404 returned error can't find the container with id 2b264a7a57b652b011be533f56c9d2b1a7788dc0742270c67d1299415b2cf2e8 Jan 03 05:51:53 crc kubenswrapper[4854]: I0103 05:51:53.979750 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-98h9c"] Jan 03 05:51:53 crc kubenswrapper[4854]: W0103 05:51:53.986003 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0e603b1_39cd_4500_a0d6_190b7a522734.slice/crio-064e43a89f56783b58d4df85773c645da671917fbedffb5967bfa5821c855adc WatchSource:0}: Error finding container 064e43a89f56783b58d4df85773c645da671917fbedffb5967bfa5821c855adc: Status 404 returned error can't find the container with id 064e43a89f56783b58d4df85773c645da671917fbedffb5967bfa5821c855adc Jan 03 05:51:54 crc kubenswrapper[4854]: I0103 05:51:54.051885 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" event={"ID":"b1c0c51a-7edb-49cb-9b71-f7ce149bde33","Type":"ContainerStarted","Data":"2b264a7a57b652b011be533f56c9d2b1a7788dc0742270c67d1299415b2cf2e8"} Jan 03 05:51:54 crc kubenswrapper[4854]: I0103 05:51:54.053224 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-98h9c" event={"ID":"c0e603b1-39cd-4500-a0d6-190b7a522734","Type":"ContainerStarted","Data":"064e43a89f56783b58d4df85773c645da671917fbedffb5967bfa5821c855adc"} Jan 03 05:51:54 crc kubenswrapper[4854]: I0103 05:51:54.056282 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5" event={"ID":"35ddfe59-bd9d-45a9-aa8d-eceb6d5e3206","Type":"ContainerStarted","Data":"66a6787de7ceaf578858e52b66a3ac542c8835616375f035bcb647813e3250ec"} Jan 03 05:51:59 crc kubenswrapper[4854]: I0103 05:51:59.100832 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5" event={"ID":"35ddfe59-bd9d-45a9-aa8d-eceb6d5e3206","Type":"ContainerStarted","Data":"c5a69e319c231033d51969096d73ae6543e4f37fa34440bab87a0ee51c23d3f9"} Jan 03 05:51:59 crc kubenswrapper[4854]: I0103 05:51:59.127717 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tj7f5" podStartSLOduration=3.65555842 podStartE2EDuration="7.127703532s" podCreationTimestamp="2026-01-03 05:51:52 +0000 UTC" firstStartedPulling="2026-01-03 05:51:53.67490749 +0000 UTC m=+692.001484062" lastFinishedPulling="2026-01-03 05:51:57.147052602 +0000 UTC m=+695.473629174" observedRunningTime="2026-01-03 05:51:59.123447164 +0000 UTC m=+697.450023736" 
watchObservedRunningTime="2026-01-03 05:51:59.127703532 +0000 UTC m=+697.454280104" Jan 03 05:52:00 crc kubenswrapper[4854]: I0103 05:52:00.112598 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" event={"ID":"b1c0c51a-7edb-49cb-9b71-f7ce149bde33","Type":"ContainerStarted","Data":"e9285e22e4b1c9f6788543f18b6d4c416b3b952ed1ee647e8920a3226019a857"} Jan 03 05:52:00 crc kubenswrapper[4854]: I0103 05:52:00.112772 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 05:52:00 crc kubenswrapper[4854]: I0103 05:52:00.116677 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-98h9c" event={"ID":"c0e603b1-39cd-4500-a0d6-190b7a522734","Type":"ContainerStarted","Data":"09f7717267c96bce01784c3169f2aa4bc97b039d64245ec49890858e2d5dcf62"} Jan 03 05:52:00 crc kubenswrapper[4854]: I0103 05:52:00.134552 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podStartSLOduration=2.43520531 podStartE2EDuration="8.134531647s" podCreationTimestamp="2026-01-03 05:51:52 +0000 UTC" firstStartedPulling="2026-01-03 05:51:53.927351655 +0000 UTC m=+692.253928237" lastFinishedPulling="2026-01-03 05:51:59.626678002 +0000 UTC m=+697.953254574" observedRunningTime="2026-01-03 05:52:00.13070867 +0000 UTC m=+698.457285242" watchObservedRunningTime="2026-01-03 05:52:00.134531647 +0000 UTC m=+698.461108219" Jan 03 05:52:00 crc kubenswrapper[4854]: I0103 05:52:00.155796 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-98h9c" podStartSLOduration=2.448423665 podStartE2EDuration="8.155777746s" podCreationTimestamp="2026-01-03 05:51:52 +0000 UTC" firstStartedPulling="2026-01-03 05:51:53.98904302 +0000 UTC m=+692.315619602" lastFinishedPulling="2026-01-03 05:51:59.696397111 +0000 UTC m=+698.022973683" observedRunningTime="2026-01-03 05:52:00.147671 +0000 UTC m=+698.474247572" watchObservedRunningTime="2026-01-03 05:52:00.155777746 +0000 UTC m=+698.482354318" Jan 03 05:52:08 crc kubenswrapper[4854]: I0103 05:52:08.259247 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.325756 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg"] Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.327412 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.332655 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.349771 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg"] Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.384610 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.384714 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbs8h\" (UniqueName: \"kubernetes.io/projected/1c0bafd5-153c-4661-9cf6-af7674792486-kube-api-access-lbs8h\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.384769 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.485934 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbs8h\" (UniqueName: \"kubernetes.io/projected/1c0bafd5-153c-4661-9cf6-af7674792486-kube-api-access-lbs8h\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.486022 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.486093 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.486626 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.487003 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.533534 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbs8h\" (UniqueName: \"kubernetes.io/projected/1c0bafd5-153c-4661-9cf6-af7674792486-kube-api-access-lbs8h\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.559303 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w"] Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.561012 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.571836 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w"] Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.596173 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.596301 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlw6q\" (UniqueName: \"kubernetes.io/projected/982de9a1-6f2b-46d2-b3d8-59d2566f1295-kube-api-access-qlw6q\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.596328 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.649063 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.697727 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlw6q\" (UniqueName: \"kubernetes.io/projected/982de9a1-6f2b-46d2-b3d8-59d2566f1295-kube-api-access-qlw6q\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.697775 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.698173 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.698281 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.698749 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.716599 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlw6q\" (UniqueName: \"kubernetes.io/projected/982de9a1-6f2b-46d2-b3d8-59d2566f1295-kube-api-access-qlw6q\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:36 crc kubenswrapper[4854]: I0103 05:52:36.900821 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:37 crc kubenswrapper[4854]: I0103 05:52:37.093568 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg"] Jan 03 05:52:37 crc kubenswrapper[4854]: I0103 05:52:37.163838 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w"] Jan 03 05:52:38 crc kubenswrapper[4854]: W0103 05:52:38.098489 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c0bafd5_153c_4661_9cf6_af7674792486.slice/crio-b8762e1e20ee7305728c1e95e95003f3417c44cbe769fde9b1abb5150c60ce33 WatchSource:0}: Error finding container b8762e1e20ee7305728c1e95e95003f3417c44cbe769fde9b1abb5150c60ce33: Status 404 returned error can't find the container with id b8762e1e20ee7305728c1e95e95003f3417c44cbe769fde9b1abb5150c60ce33 Jan 03 05:52:39 crc kubenswrapper[4854]: I0103 05:52:39.103660 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" event={"ID":"982de9a1-6f2b-46d2-b3d8-59d2566f1295","Type":"ContainerStarted","Data":"1e372ff6821ab8a08feef2eea9c44344fdd2fde9443e99e1a228924a53b0b355"} Jan 03 05:52:39 crc kubenswrapper[4854]: I0103 05:52:39.107313 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" event={"ID":"1c0bafd5-153c-4661-9cf6-af7674792486","Type":"ContainerStarted","Data":"b8762e1e20ee7305728c1e95e95003f3417c44cbe769fde9b1abb5150c60ce33"} Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.079031 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dngnk"] Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.081243 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.096935 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dngnk"] Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.118186 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-utilities\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.118823 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pvvv\" (UniqueName: \"kubernetes.io/projected/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-kube-api-access-2pvvv\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.119448 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-catalog-content\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.221115 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-catalog-content\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.221202 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-utilities\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.221228 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pvvv\" (UniqueName: \"kubernetes.io/projected/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-kube-api-access-2pvvv\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.222032 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-catalog-content\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.222248 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-utilities\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.259857 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2pvvv\" (UniqueName: \"kubernetes.io/projected/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-kube-api-access-2pvvv\") pod \"redhat-operators-dngnk\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.402706 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:52:40 crc kubenswrapper[4854]: I0103 05:52:40.641308 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dngnk"] Jan 03 05:52:41 crc kubenswrapper[4854]: I0103 05:52:41.118876 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dngnk" event={"ID":"cefb4942-a9e4-42bb-884b-dac9d8c52fa4","Type":"ContainerStarted","Data":"421b5ae7d754fc1c33d4a4f79be08d02b9433a3a7403833064f52467bc7e06f5"} Jan 03 05:52:42 crc kubenswrapper[4854]: I0103 05:52:42.125848 4854 generic.go:334] "Generic (PLEG): container finished" podID="1c0bafd5-153c-4661-9cf6-af7674792486" containerID="45861c40f02a53575bdd786c9f1e187464095300d84ce7c03e37a55f715ae90d" exitCode=0 Jan 03 05:52:42 crc kubenswrapper[4854]: I0103 05:52:42.126401 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" event={"ID":"1c0bafd5-153c-4661-9cf6-af7674792486","Type":"ContainerDied","Data":"45861c40f02a53575bdd786c9f1e187464095300d84ce7c03e37a55f715ae90d"} Jan 03 05:52:42 crc kubenswrapper[4854]: I0103 05:52:42.129360 4854 generic.go:334] "Generic (PLEG): container finished" podID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerID="037073212a78dc58adfee852ebb2b2ea1cba1612829f0b8a75719b5df745d1df" exitCode=0 Jan 03 05:52:42 crc kubenswrapper[4854]: I0103 05:52:42.129579 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dngnk" event={"ID":"cefb4942-a9e4-42bb-884b-dac9d8c52fa4","Type":"ContainerDied","Data":"037073212a78dc58adfee852ebb2b2ea1cba1612829f0b8a75719b5df745d1df"} Jan 03 05:52:42 crc kubenswrapper[4854]: I0103 05:52:42.131896 4854 generic.go:334] "Generic (PLEG): container finished" podID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerID="ddea09e2cc9d09d7575a76d1dab38545784aa9249948156d5c1954e53e4e7d7c" exitCode=0 Jan 03 05:52:42 crc kubenswrapper[4854]: I0103 05:52:42.131927 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" event={"ID":"982de9a1-6f2b-46d2-b3d8-59d2566f1295","Type":"ContainerDied","Data":"ddea09e2cc9d09d7575a76d1dab38545784aa9249948156d5c1954e53e4e7d7c"} Jan 03 05:52:44 crc kubenswrapper[4854]: I0103 05:52:44.248226 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dngnk" event={"ID":"cefb4942-a9e4-42bb-884b-dac9d8c52fa4","Type":"ContainerStarted","Data":"c8446da4c2326946f18c0c754fa674012226890a98b9b2bcf97f99a0687982b7"} Jan 03 05:52:46 crc kubenswrapper[4854]: I0103 05:52:46.269020 4854 generic.go:334] "Generic (PLEG): container finished" podID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerID="c8446da4c2326946f18c0c754fa674012226890a98b9b2bcf97f99a0687982b7" exitCode=0 Jan 03 05:52:46 crc kubenswrapper[4854]: I0103 05:52:46.269134 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dngnk" 
event={"ID":"cefb4942-a9e4-42bb-884b-dac9d8c52fa4","Type":"ContainerDied","Data":"c8446da4c2326946f18c0c754fa674012226890a98b9b2bcf97f99a0687982b7"} Jan 03 05:52:54 crc kubenswrapper[4854]: I0103 05:52:54.335954 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dngnk" event={"ID":"cefb4942-a9e4-42bb-884b-dac9d8c52fa4","Type":"ContainerStarted","Data":"94f9dfd5d2c222456ac600104973e0b17cde2d677aaab81b6a793653bbfb4ee4"} Jan 03 05:52:54 crc kubenswrapper[4854]: I0103 05:52:54.337971 4854 generic.go:334] "Generic (PLEG): container finished" podID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerID="96ca408f37fdea9a39ee53b39c9329a0733d0c3f6f574e46c38363761980ab94" exitCode=0 Jan 03 05:52:54 crc kubenswrapper[4854]: I0103 05:52:54.338023 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" event={"ID":"982de9a1-6f2b-46d2-b3d8-59d2566f1295","Type":"ContainerDied","Data":"96ca408f37fdea9a39ee53b39c9329a0733d0c3f6f574e46c38363761980ab94"} Jan 03 05:52:54 crc kubenswrapper[4854]: I0103 05:52:54.340255 4854 generic.go:334] "Generic (PLEG): container finished" podID="1c0bafd5-153c-4661-9cf6-af7674792486" containerID="3ff5dff24586c5e53242aaf7b34188d43c18fc5e50a352300b641654538ebfd6" exitCode=0 Jan 03 05:52:54 crc kubenswrapper[4854]: I0103 05:52:54.340288 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" event={"ID":"1c0bafd5-153c-4661-9cf6-af7674792486","Type":"ContainerDied","Data":"3ff5dff24586c5e53242aaf7b34188d43c18fc5e50a352300b641654538ebfd6"} Jan 03 05:52:54 crc kubenswrapper[4854]: I0103 05:52:54.383272 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dngnk" podStartSLOduration=3.264995555 podStartE2EDuration="14.383255776s" podCreationTimestamp="2026-01-03 05:52:40 +0000 UTC" firstStartedPulling="2026-01-03 05:52:42.13136362 +0000 UTC m=+740.457940192" lastFinishedPulling="2026-01-03 05:52:53.249623831 +0000 UTC m=+751.576200413" observedRunningTime="2026-01-03 05:52:54.382802765 +0000 UTC m=+752.709379357" watchObservedRunningTime="2026-01-03 05:52:54.383255776 +0000 UTC m=+752.709832348" Jan 03 05:52:55 crc kubenswrapper[4854]: I0103 05:52:55.351854 4854 generic.go:334] "Generic (PLEG): container finished" podID="1c0bafd5-153c-4661-9cf6-af7674792486" containerID="0f9cc322e21eb144dffc3f2e069fc72fb4693379cacf511928d621d5cd419e99" exitCode=0 Jan 03 05:52:55 crc kubenswrapper[4854]: I0103 05:52:55.351933 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" event={"ID":"1c0bafd5-153c-4661-9cf6-af7674792486","Type":"ContainerDied","Data":"0f9cc322e21eb144dffc3f2e069fc72fb4693379cacf511928d621d5cd419e99"} Jan 03 05:52:55 crc kubenswrapper[4854]: I0103 05:52:55.355218 4854 generic.go:334] "Generic (PLEG): container finished" podID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerID="2aeea758182033e0cc7035123b6eef3519bc65aa6f0ea7572c0aadc6e410f70b" exitCode=0 Jan 03 05:52:55 crc kubenswrapper[4854]: I0103 05:52:55.355284 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" 
event={"ID":"982de9a1-6f2b-46d2-b3d8-59d2566f1295","Type":"ContainerDied","Data":"2aeea758182033e0cc7035123b6eef3519bc65aa6f0ea7572c0aadc6e410f70b"} Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.766245 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.775959 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.896536 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlw6q\" (UniqueName: \"kubernetes.io/projected/982de9a1-6f2b-46d2-b3d8-59d2566f1295-kube-api-access-qlw6q\") pod \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.896637 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-util\") pod \"1c0bafd5-153c-4661-9cf6-af7674792486\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.896667 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-bundle\") pod \"1c0bafd5-153c-4661-9cf6-af7674792486\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.896706 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbs8h\" (UniqueName: \"kubernetes.io/projected/1c0bafd5-153c-4661-9cf6-af7674792486-kube-api-access-lbs8h\") pod \"1c0bafd5-153c-4661-9cf6-af7674792486\" (UID: \"1c0bafd5-153c-4661-9cf6-af7674792486\") " Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.896738 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-bundle\") pod \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.896823 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-util\") pod \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\" (UID: \"982de9a1-6f2b-46d2-b3d8-59d2566f1295\") " Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.897852 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-bundle" (OuterVolumeSpecName: "bundle") pod "1c0bafd5-153c-4661-9cf6-af7674792486" (UID: "1c0bafd5-153c-4661-9cf6-af7674792486"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.898113 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-bundle" (OuterVolumeSpecName: "bundle") pod "982de9a1-6f2b-46d2-b3d8-59d2566f1295" (UID: "982de9a1-6f2b-46d2-b3d8-59d2566f1295"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.903308 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982de9a1-6f2b-46d2-b3d8-59d2566f1295-kube-api-access-qlw6q" (OuterVolumeSpecName: "kube-api-access-qlw6q") pod "982de9a1-6f2b-46d2-b3d8-59d2566f1295" (UID: "982de9a1-6f2b-46d2-b3d8-59d2566f1295"). InnerVolumeSpecName "kube-api-access-qlw6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.909354 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-util" (OuterVolumeSpecName: "util") pod "982de9a1-6f2b-46d2-b3d8-59d2566f1295" (UID: "982de9a1-6f2b-46d2-b3d8-59d2566f1295"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.910000 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0bafd5-153c-4661-9cf6-af7674792486-kube-api-access-lbs8h" (OuterVolumeSpecName: "kube-api-access-lbs8h") pod "1c0bafd5-153c-4661-9cf6-af7674792486" (UID: "1c0bafd5-153c-4661-9cf6-af7674792486"). InnerVolumeSpecName "kube-api-access-lbs8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.916681 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-util" (OuterVolumeSpecName: "util") pod "1c0bafd5-153c-4661-9cf6-af7674792486" (UID: "1c0bafd5-153c-4661-9cf6-af7674792486"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.997997 4854 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.998029 4854 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/982de9a1-6f2b-46d2-b3d8-59d2566f1295-util\") on node \"crc\" DevicePath \"\"" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.998038 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlw6q\" (UniqueName: \"kubernetes.io/projected/982de9a1-6f2b-46d2-b3d8-59d2566f1295-kube-api-access-qlw6q\") on node \"crc\" DevicePath \"\"" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.998048 4854 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-util\") on node \"crc\" DevicePath \"\"" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.998057 4854 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c0bafd5-153c-4661-9cf6-af7674792486-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:52:56 crc kubenswrapper[4854]: I0103 05:52:56.998065 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbs8h\" (UniqueName: \"kubernetes.io/projected/1c0bafd5-153c-4661-9cf6-af7674792486-kube-api-access-lbs8h\") on node \"crc\" DevicePath \"\"" Jan 03 05:52:57 crc kubenswrapper[4854]: I0103 05:52:57.375014 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" Jan 03 05:52:57 crc kubenswrapper[4854]: I0103 05:52:57.375133 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2c2f6w" event={"ID":"982de9a1-6f2b-46d2-b3d8-59d2566f1295","Type":"ContainerDied","Data":"1e372ff6821ab8a08feef2eea9c44344fdd2fde9443e99e1a228924a53b0b355"} Jan 03 05:52:57 crc kubenswrapper[4854]: I0103 05:52:57.375646 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e372ff6821ab8a08feef2eea9c44344fdd2fde9443e99e1a228924a53b0b355" Jan 03 05:52:57 crc kubenswrapper[4854]: I0103 05:52:57.379065 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" event={"ID":"1c0bafd5-153c-4661-9cf6-af7674792486","Type":"ContainerDied","Data":"b8762e1e20ee7305728c1e95e95003f3417c44cbe769fde9b1abb5150c60ce33"} Jan 03 05:52:57 crc kubenswrapper[4854]: I0103 05:52:57.379148 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8762e1e20ee7305728c1e95e95003f3417c44cbe769fde9b1abb5150c60ce33" Jan 03 05:52:57 crc kubenswrapper[4854]: I0103 05:52:57.379262 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360brjsxg" Jan 03 05:53:00 crc kubenswrapper[4854]: I0103 05:53:00.403912 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:53:00 crc kubenswrapper[4854]: I0103 05:53:00.404242 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:53:00 crc kubenswrapper[4854]: I0103 05:53:00.461813 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:53:01 crc kubenswrapper[4854]: I0103 05:53:01.484498 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:53:01 crc kubenswrapper[4854]: I0103 05:53:01.549418 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dngnk"] Jan 03 05:53:03 crc kubenswrapper[4854]: I0103 05:53:03.424153 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dngnk" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerName="registry-server" containerID="cri-o://94f9dfd5d2c222456ac600104973e0b17cde2d677aaab81b6a793653bbfb4ee4" gracePeriod=2 Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.443252 4854 generic.go:334] "Generic (PLEG): container finished" podID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerID="94f9dfd5d2c222456ac600104973e0b17cde2d677aaab81b6a793653bbfb4ee4" exitCode=0 Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.444434 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dngnk" event={"ID":"cefb4942-a9e4-42bb-884b-dac9d8c52fa4","Type":"ContainerDied","Data":"94f9dfd5d2c222456ac600104973e0b17cde2d677aaab81b6a793653bbfb4ee4"} Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.695851 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.809711 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-catalog-content\") pod \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.809856 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pvvv\" (UniqueName: \"kubernetes.io/projected/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-kube-api-access-2pvvv\") pod \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.809953 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-utilities\") pod \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\" (UID: \"cefb4942-a9e4-42bb-884b-dac9d8c52fa4\") " Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.811511 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-utilities" (OuterVolumeSpecName: "utilities") pod "cefb4942-a9e4-42bb-884b-dac9d8c52fa4" (UID: "cefb4942-a9e4-42bb-884b-dac9d8c52fa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.815720 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-kube-api-access-2pvvv" (OuterVolumeSpecName: "kube-api-access-2pvvv") pod "cefb4942-a9e4-42bb-884b-dac9d8c52fa4" (UID: "cefb4942-a9e4-42bb-884b-dac9d8c52fa4"). InnerVolumeSpecName "kube-api-access-2pvvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.911603 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pvvv\" (UniqueName: \"kubernetes.io/projected/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-kube-api-access-2pvvv\") on node \"crc\" DevicePath \"\"" Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.911836 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:53:05 crc kubenswrapper[4854]: I0103 05:53:05.947370 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cefb4942-a9e4-42bb-884b-dac9d8c52fa4" (UID: "cefb4942-a9e4-42bb-884b-dac9d8c52fa4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:53:06 crc kubenswrapper[4854]: I0103 05:53:06.015611 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cefb4942-a9e4-42bb-884b-dac9d8c52fa4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:53:06 crc kubenswrapper[4854]: I0103 05:53:06.457366 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dngnk" event={"ID":"cefb4942-a9e4-42bb-884b-dac9d8c52fa4","Type":"ContainerDied","Data":"421b5ae7d754fc1c33d4a4f79be08d02b9433a3a7403833064f52467bc7e06f5"} Jan 03 05:53:06 crc kubenswrapper[4854]: I0103 05:53:06.457432 4854 scope.go:117] "RemoveContainer" containerID="94f9dfd5d2c222456ac600104973e0b17cde2d677aaab81b6a793653bbfb4ee4" Jan 03 05:53:06 crc kubenswrapper[4854]: I0103 05:53:06.457466 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dngnk" Jan 03 05:53:06 crc kubenswrapper[4854]: I0103 05:53:06.483569 4854 scope.go:117] "RemoveContainer" containerID="c8446da4c2326946f18c0c754fa674012226890a98b9b2bcf97f99a0687982b7" Jan 03 05:53:06 crc kubenswrapper[4854]: I0103 05:53:06.486196 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dngnk"] Jan 03 05:53:06 crc kubenswrapper[4854]: I0103 05:53:06.493992 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dngnk"] Jan 03 05:53:06 crc kubenswrapper[4854]: I0103 05:53:06.506609 4854 scope.go:117] "RemoveContainer" containerID="037073212a78dc58adfee852ebb2b2ea1cba1612829f0b8a75719b5df745d1df" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.127497 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" path="/var/lib/kubelet/pods/cefb4942-a9e4-42bb-884b-dac9d8c52fa4/volumes" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787135 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll"] Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787771 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerName="util" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787790 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerName="util" Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787805 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0bafd5-153c-4661-9cf6-af7674792486" containerName="extract" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787814 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0bafd5-153c-4661-9cf6-af7674792486" containerName="extract" Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787826 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerName="extract" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787834 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerName="extract" Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787846 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerName="extract-content" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 
05:53:08.787854 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerName="extract-content" Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787865 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0bafd5-153c-4661-9cf6-af7674792486" containerName="pull" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787872 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0bafd5-153c-4661-9cf6-af7674792486" containerName="pull" Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787882 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerName="registry-server" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787890 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerName="registry-server" Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787899 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerName="extract-utilities" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787907 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerName="extract-utilities" Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787918 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerName="pull" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787925 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerName="pull" Jan 03 05:53:08 crc kubenswrapper[4854]: E0103 05:53:08.787941 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0bafd5-153c-4661-9cf6-af7674792486" containerName="util" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.787948 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0bafd5-153c-4661-9cf6-af7674792486" containerName="util" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.788073 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefb4942-a9e4-42bb-884b-dac9d8c52fa4" containerName="registry-server" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.788108 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0bafd5-153c-4661-9cf6-af7674792486" containerName="extract" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.788124 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="982de9a1-6f2b-46d2-b3d8-59d2566f1295" containerName="extract" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.788919 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.793933 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.794484 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.794688 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.795997 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.796642 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-fwfmg" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.796929 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.812224 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll"] Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.868406 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/0c7ed8af-66a8-4ce9-95bd-4818cc646245-manager-config\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.868454 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-webhook-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.868483 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.868581 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-apiservice-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.868601 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjspm\" (UniqueName: 
\"kubernetes.io/projected/0c7ed8af-66a8-4ce9-95bd-4818cc646245-kube-api-access-bjspm\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.970375 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-apiservice-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.970423 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjspm\" (UniqueName: \"kubernetes.io/projected/0c7ed8af-66a8-4ce9-95bd-4818cc646245-kube-api-access-bjspm\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.970467 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/0c7ed8af-66a8-4ce9-95bd-4818cc646245-manager-config\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.970489 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-webhook-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.970539 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.972549 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/0c7ed8af-66a8-4ce9-95bd-4818cc646245-manager-config\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.979839 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-apiservice-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.980867 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:08 crc kubenswrapper[4854]: I0103 05:53:08.982514 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c7ed8af-66a8-4ce9-95bd-4818cc646245-webhook-cert\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:09 crc kubenswrapper[4854]: I0103 05:53:09.003341 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjspm\" (UniqueName: \"kubernetes.io/projected/0c7ed8af-66a8-4ce9-95bd-4818cc646245-kube-api-access-bjspm\") pod \"loki-operator-controller-manager-bd45dfbc8-vmrll\" (UID: \"0c7ed8af-66a8-4ce9-95bd-4818cc646245\") " pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:09 crc kubenswrapper[4854]: I0103 05:53:09.103261 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:09 crc kubenswrapper[4854]: I0103 05:53:09.607433 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll"] Jan 03 05:53:09 crc kubenswrapper[4854]: W0103 05:53:09.610964 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c7ed8af_66a8_4ce9_95bd_4818cc646245.slice/crio-b0c6f45c3d5598696d15a843c403e0de45ccf321905fdedbfb6b670b4b454e75 WatchSource:0}: Error finding container b0c6f45c3d5598696d15a843c403e0de45ccf321905fdedbfb6b670b4b454e75: Status 404 returned error can't find the container with id b0c6f45c3d5598696d15a843c403e0de45ccf321905fdedbfb6b670b4b454e75 Jan 03 05:53:10 crc kubenswrapper[4854]: I0103 05:53:10.496099 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" event={"ID":"0c7ed8af-66a8-4ce9-95bd-4818cc646245","Type":"ContainerStarted","Data":"b0c6f45c3d5598696d15a843c403e0de45ccf321905fdedbfb6b670b4b454e75"} Jan 03 05:53:10 crc kubenswrapper[4854]: I0103 05:53:10.958955 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-24swh"] Jan 03 05:53:10 crc kubenswrapper[4854]: I0103 05:53:10.959948 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-24swh" Jan 03 05:53:10 crc kubenswrapper[4854]: I0103 05:53:10.962024 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-hmzb9" Jan 03 05:53:10 crc kubenswrapper[4854]: I0103 05:53:10.962307 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 03 05:53:10 crc kubenswrapper[4854]: I0103 05:53:10.962611 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 03 05:53:10 crc kubenswrapper[4854]: I0103 05:53:10.975836 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-24swh"] Jan 03 05:53:11 crc kubenswrapper[4854]: I0103 05:53:11.001126 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmlgk\" (UniqueName: \"kubernetes.io/projected/d65264c9-97db-44f5-b218-7855a64ce8f7-kube-api-access-lmlgk\") pod \"cluster-logging-operator-79cf69ddc8-24swh\" (UID: \"d65264c9-97db-44f5-b218-7855a64ce8f7\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-24swh" Jan 03 05:53:11 crc kubenswrapper[4854]: I0103 05:53:11.102095 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmlgk\" (UniqueName: \"kubernetes.io/projected/d65264c9-97db-44f5-b218-7855a64ce8f7-kube-api-access-lmlgk\") pod \"cluster-logging-operator-79cf69ddc8-24swh\" (UID: \"d65264c9-97db-44f5-b218-7855a64ce8f7\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-24swh" Jan 03 05:53:11 crc kubenswrapper[4854]: I0103 05:53:11.162276 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmlgk\" (UniqueName: \"kubernetes.io/projected/d65264c9-97db-44f5-b218-7855a64ce8f7-kube-api-access-lmlgk\") pod \"cluster-logging-operator-79cf69ddc8-24swh\" (UID: \"d65264c9-97db-44f5-b218-7855a64ce8f7\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-24swh" Jan 03 05:53:11 crc kubenswrapper[4854]: I0103 05:53:11.279331 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-24swh" Jan 03 05:53:11 crc kubenswrapper[4854]: I0103 05:53:11.636294 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-24swh"] Jan 03 05:53:11 crc kubenswrapper[4854]: W0103 05:53:11.648904 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd65264c9_97db_44f5_b218_7855a64ce8f7.slice/crio-3b5504613b0d2e46ef7c12bc8c662256a6c354d685286785a48da0b466c46700 WatchSource:0}: Error finding container 3b5504613b0d2e46ef7c12bc8c662256a6c354d685286785a48da0b466c46700: Status 404 returned error can't find the container with id 3b5504613b0d2e46ef7c12bc8c662256a6c354d685286785a48da0b466c46700 Jan 03 05:53:12 crc kubenswrapper[4854]: I0103 05:53:12.515031 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-24swh" event={"ID":"d65264c9-97db-44f5-b218-7855a64ce8f7","Type":"ContainerStarted","Data":"3b5504613b0d2e46ef7c12bc8c662256a6c354d685286785a48da0b466c46700"} Jan 03 05:53:19 crc kubenswrapper[4854]: I0103 05:53:19.563524 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-24swh" event={"ID":"d65264c9-97db-44f5-b218-7855a64ce8f7","Type":"ContainerStarted","Data":"e202aa28f3469e5a1bc68dfd0922894a2ec59b24c0273b00ea076e7eaa782659"} Jan 03 05:53:19 crc kubenswrapper[4854]: I0103 05:53:19.566992 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" event={"ID":"0c7ed8af-66a8-4ce9-95bd-4818cc646245","Type":"ContainerStarted","Data":"a957a4826fca42b35b6e9ebf213d5830ccdff686e48afdeea502621addf72ba0"} Jan 03 05:53:19 crc kubenswrapper[4854]: I0103 05:53:19.591471 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-24swh" podStartSLOduration=2.033275118 podStartE2EDuration="9.591452971s" podCreationTimestamp="2026-01-03 05:53:10 +0000 UTC" firstStartedPulling="2026-01-03 05:53:11.651526131 +0000 UTC m=+769.978102703" lastFinishedPulling="2026-01-03 05:53:19.209703984 +0000 UTC m=+777.536280556" observedRunningTime="2026-01-03 05:53:19.588482455 +0000 UTC m=+777.915059037" watchObservedRunningTime="2026-01-03 05:53:19.591452971 +0000 UTC m=+777.918029543" Jan 03 05:53:31 crc kubenswrapper[4854]: I0103 05:53:31.662147 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" event={"ID":"0c7ed8af-66a8-4ce9-95bd-4818cc646245","Type":"ContainerStarted","Data":"73b59c7bfed9c28b720c2eca192e2636cdcb7bba938f67b6e0e918759717f9bb"} Jan 03 05:53:31 crc kubenswrapper[4854]: I0103 05:53:31.662780 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:31 crc kubenswrapper[4854]: I0103 05:53:31.666505 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 05:53:31 crc kubenswrapper[4854]: I0103 05:53:31.709670 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" podStartSLOduration=2.686225962 podStartE2EDuration="23.709642709s" 
podCreationTimestamp="2026-01-03 05:53:08 +0000 UTC" firstStartedPulling="2026-01-03 05:53:09.613501055 +0000 UTC m=+767.940077627" lastFinishedPulling="2026-01-03 05:53:30.636917792 +0000 UTC m=+788.963494374" observedRunningTime="2026-01-03 05:53:31.70246593 +0000 UTC m=+790.029042532" watchObservedRunningTime="2026-01-03 05:53:31.709642709 +0000 UTC m=+790.036219321" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.028743 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.030364 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.034599 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.048654 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.088750 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.188681 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmntt\" (UniqueName: \"kubernetes.io/projected/37e41525-9509-4219-9036-0b0498d8bf14-kube-api-access-qmntt\") pod \"minio\" (UID: \"37e41525-9509-4219-9036-0b0498d8bf14\") " pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.188730 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ce2042a2-cb67-40b6-b76e-aa40a533d8bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce2042a2-cb67-40b6-b76e-aa40a533d8bd\") pod \"minio\" (UID: \"37e41525-9509-4219-9036-0b0498d8bf14\") " pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.290401 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmntt\" (UniqueName: \"kubernetes.io/projected/37e41525-9509-4219-9036-0b0498d8bf14-kube-api-access-qmntt\") pod \"minio\" (UID: \"37e41525-9509-4219-9036-0b0498d8bf14\") " pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.290456 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ce2042a2-cb67-40b6-b76e-aa40a533d8bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce2042a2-cb67-40b6-b76e-aa40a533d8bd\") pod \"minio\" (UID: \"37e41525-9509-4219-9036-0b0498d8bf14\") " pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.294423 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.294464 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ce2042a2-cb67-40b6-b76e-aa40a533d8bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce2042a2-cb67-40b6-b76e-aa40a533d8bd\") pod \"minio\" (UID: \"37e41525-9509-4219-9036-0b0498d8bf14\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a74c5aa182e109132d8ed9330d00f76eef13c22181ab35c3e49a8112372adbc4/globalmount\"" pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.321444 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmntt\" (UniqueName: \"kubernetes.io/projected/37e41525-9509-4219-9036-0b0498d8bf14-kube-api-access-qmntt\") pod \"minio\" (UID: \"37e41525-9509-4219-9036-0b0498d8bf14\") " pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.322554 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ce2042a2-cb67-40b6-b76e-aa40a533d8bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce2042a2-cb67-40b6-b76e-aa40a533d8bd\") pod \"minio\" (UID: \"37e41525-9509-4219-9036-0b0498d8bf14\") " pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.403123 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.686202 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 03 05:53:37 crc kubenswrapper[4854]: I0103 05:53:37.706620 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"37e41525-9509-4219-9036-0b0498d8bf14","Type":"ContainerStarted","Data":"a4ae1a92345b0ea86b1c0d556bf84b55b6e81f472d18d26f37bedb7f0888ab84"} Jan 03 05:53:41 crc kubenswrapper[4854]: I0103 05:53:41.755627 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:53:41 crc kubenswrapper[4854]: I0103 05:53:41.756159 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:53:55 crc kubenswrapper[4854]: I0103 05:53:55.843450 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"37e41525-9509-4219-9036-0b0498d8bf14","Type":"ContainerStarted","Data":"f8717d9de19c4719e63ed251a59cb0156e28d9a4459ba3dc046662e861a2b8f2"} Jan 03 05:53:55 crc kubenswrapper[4854]: I0103 05:53:55.861808 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.17694135 podStartE2EDuration="21.861787779s" podCreationTimestamp="2026-01-03 05:53:34 +0000 UTC" firstStartedPulling="2026-01-03 05:53:37.695001227 +0000 UTC m=+796.021577799" lastFinishedPulling="2026-01-03 05:53:55.379847636 +0000 UTC m=+813.706424228" observedRunningTime="2026-01-03 05:53:55.857147043 +0000 UTC m=+814.183723675" watchObservedRunningTime="2026-01-03 05:53:55.861787779 +0000 UTC m=+814.188364361" Jan 03 05:54:02 crc 
kubenswrapper[4854]: I0103 05:54:02.203499 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv"] Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.206590 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.210517 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.212225 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv"] Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.215605 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-tlqzr" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.215861 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.215982 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.218229 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.262845 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.262900 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.262921 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjlkp\" (UniqueName: \"kubernetes.io/projected/128d93c6-02aa-4f68-aac6-cfcab1896a35-kube-api-access-pjlkp\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.262957 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.262989 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/128d93c6-02aa-4f68-aac6-cfcab1896a35-config\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.364937 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.365005 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.365031 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjlkp\" (UniqueName: \"kubernetes.io/projected/128d93c6-02aa-4f68-aac6-cfcab1896a35-kube-api-access-pjlkp\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.365095 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.365154 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128d93c6-02aa-4f68-aac6-cfcab1896a35-config\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.366591 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128d93c6-02aa-4f68-aac6-cfcab1896a35-config\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.373621 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.384884 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-distributor-grpc\") pod 
\"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.398038 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/128d93c6-02aa-4f68-aac6-cfcab1896a35-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.399892 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjlkp\" (UniqueName: \"kubernetes.io/projected/128d93c6-02aa-4f68-aac6-cfcab1896a35-kube-api-access-pjlkp\") pod \"logging-loki-distributor-5f678c8dd6-p67sv\" (UID: \"128d93c6-02aa-4f68-aac6-cfcab1896a35\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.511660 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-b8thp"] Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.518825 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.521480 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.524448 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.524756 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.538707 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.558823 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-b8thp"] Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.567616 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-s3\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.567699 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqxsv\" (UniqueName: \"kubernetes.io/projected/66f9492b-16b5-4b86-bb22-560ad0f8001c-kube-api-access-bqxsv\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.567737 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.567786 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.567870 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66f9492b-16b5-4b86-bb22-560ad0f8001c-config\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.568046 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.609772 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"] Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.614181 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.624487 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.624658 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.640896 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"]
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.670639 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqxsv\" (UniqueName: \"kubernetes.io/projected/66f9492b-16b5-4b86-bb22-560ad0f8001c-kube-api-access-bqxsv\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.670745 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.670800 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.670819 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnlm\" (UniqueName: \"kubernetes.io/projected/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-kube-api-access-7dnlm\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.671053 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.671131 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-config\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.671163 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.671216 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66f9492b-16b5-4b86-bb22-560ad0f8001c-config\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.671282 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.671325 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-s3\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.671376 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.673447 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.675254 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66f9492b-16b5-4b86-bb22-560ad0f8001c-config\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.682013 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.703262 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-s3\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.706442 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/66f9492b-16b5-4b86-bb22-560ad0f8001c-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.734797 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqxsv\" (UniqueName: \"kubernetes.io/projected/66f9492b-16b5-4b86-bb22-560ad0f8001c-kube-api-access-bqxsv\") pod \"logging-loki-querier-76788598db-b8thp\" (UID: \"66f9492b-16b5-4b86-bb22-560ad0f8001c\") " pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.739342 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"]
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.740696 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.748600 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.749048 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.764520 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.764749 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.765156 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.772858 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.772912 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.772957 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-config\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.773019 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.773046 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf729\" (UniqueName: \"kubernetes.io/projected/428c2117-0003-47b2-abfa-f4f7930e126c-kube-api-access-jf729\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.773068 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tenants\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.773109 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tls-secret\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.774568 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-config\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.774729 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-rbac\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.774760 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.774791 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.774819 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-lokistack-gateway\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.774866 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.774892 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnlm\" (UniqueName: \"kubernetes.io/projected/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-kube-api-access-7dnlm\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.777574 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.785282 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"]
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.787313 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.804850 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.805042 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.811601 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-d7rvw"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.825009 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"]
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.838955 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnlm\" (UniqueName: \"kubernetes.io/projected/b98c17f7-1569-4c33-ab65-f4c2ba0555ae-kube-api-access-7dnlm\") pod \"logging-loki-query-frontend-69d9546745-42f7g\" (UID: \"b98c17f7-1569-4c33-ab65-f4c2ba0555ae\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.852500 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-b8thp"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.901301 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf729\" (UniqueName: \"kubernetes.io/projected/428c2117-0003-47b2-abfa-f4f7930e126c-kube-api-access-jf729\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.901353 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tenants\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.901378 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tls-secret\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: E0103 05:54:02.902859 4854 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Jan 03 05:54:02 crc kubenswrapper[4854]: E0103 05:54:02.902924 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tls-secret podName:428c2117-0003-47b2-abfa-f4f7930e126c nodeName:}" failed. No retries permitted until 2026-01-03 05:54:03.402906121 +0000 UTC m=+821.729482683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tls-secret") pod "logging-loki-gateway-656bf7cf7c-98v92" (UID: "428c2117-0003-47b2-abfa-f4f7930e126c") : secret "logging-loki-gateway-http" not found
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.903247 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-rbac\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.903288 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.903332 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.903353 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-lokistack-gateway\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.903487 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: E0103 05:54:02.903617 4854 configmap.go:193] Couldn't get configMap openshift-logging/logging-loki-gateway-ca-bundle: configmap "logging-loki-gateway-ca-bundle" not found
Jan 03 05:54:02 crc kubenswrapper[4854]: E0103 05:54:02.903642 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-ca-bundle podName:428c2117-0003-47b2-abfa-f4f7930e126c nodeName:}" failed. No retries permitted until 2026-01-03 05:54:03.403634049 +0000 UTC m=+821.730210621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "logging-loki-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-ca-bundle") pod "logging-loki-gateway-656bf7cf7c-98v92" (UID: "428c2117-0003-47b2-abfa-f4f7930e126c") : configmap "logging-loki-gateway-ca-bundle" not found
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.904479 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"]
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.904559 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-rbac\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.905308 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-lokistack-gateway\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.905794 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.915213 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tenants\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.915273 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.955920 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"
Jan 03 05:54:02 crc kubenswrapper[4854]: I0103 05:54:02.959024 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf729\" (UniqueName: \"kubernetes.io/projected/428c2117-0003-47b2-abfa-f4f7930e126c-kube-api-access-jf729\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.004104 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-tenants\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.004156 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.004177 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-tls-secret\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.004210 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-rbac\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.004232 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.004363 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vzvf\" (UniqueName: \"kubernetes.io/projected/4de190f3-1f91-4bd7-9d46-df7235633d58-kube-api-access-6vzvf\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.004401 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-lokistack-gateway\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.004418 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.105507 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vzvf\" (UniqueName: \"kubernetes.io/projected/4de190f3-1f91-4bd7-9d46-df7235633d58-kube-api-access-6vzvf\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.105563 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-lokistack-gateway\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.105585 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.105625 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-tenants\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.105644 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.105661 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-tls-secret\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.105694 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-rbac\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.105713 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.109500 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.111510 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-lokistack-gateway\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.112885 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.118346 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-tenants\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.119160 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-logging-loki-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.122319 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/4de190f3-1f91-4bd7-9d46-df7235633d58-tls-secret\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.123068 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/4de190f3-1f91-4bd7-9d46-df7235633d58-rbac\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.134176 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vzvf\" (UniqueName: \"kubernetes.io/projected/4de190f3-1f91-4bd7-9d46-df7235633d58-kube-api-access-6vzvf\") pod \"logging-loki-gateway-656bf7cf7c-w49nx\" (UID: \"4de190f3-1f91-4bd7-9d46-df7235633d58\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.208921 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.405842 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.406884 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.412318 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.412746 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.416104 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.427795 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tls-secret\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.427923 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.428944 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428c2117-0003-47b2-abfa-f4f7930e126c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.440486 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/428c2117-0003-47b2-abfa-f4f7930e126c-tls-secret\") pod \"logging-loki-gateway-656bf7cf7c-98v92\" (UID: \"428c2117-0003-47b2-abfa-f4f7930e126c\") " pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.473446 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-b8thp"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.517574 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.529135 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.529179 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72475b1a-e333-47b6-9122-2cc81e2043ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72475b1a-e333-47b6-9122-2cc81e2043ab\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.529226 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.529266 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89758075-f36e-45b6-8df4-681936387c1c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89758075-f36e-45b6-8df4-681936387c1c\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.529298 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzjc\" (UniqueName: \"kubernetes.io/projected/78ad3d84-530d-45e9-928d-c552448aec20-kube-api-access-xhzjc\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.529329 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ad3d84-530d-45e9-928d-c552448aec20-config\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.529363 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.529398 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.574740 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.575916 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.593158 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.595036 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.595876 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.630411 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhzjc\" (UniqueName: \"kubernetes.io/projected/78ad3d84-530d-45e9-928d-c552448aec20-kube-api-access-xhzjc\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.630469 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ad3d84-530d-45e9-928d-c552448aec20-config\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.630511 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.630548 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.630595 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.630617 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-72475b1a-e333-47b6-9122-2cc81e2043ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72475b1a-e333-47b6-9122-2cc81e2043ab\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.630644 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.630683 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89758075-f36e-45b6-8df4-681936387c1c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89758075-f36e-45b6-8df4-681936387c1c\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.635798 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.640194 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.640270 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89758075-f36e-45b6-8df4-681936387c1c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89758075-f36e-45b6-8df4-681936387c1c\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/178d2c88c3430bd293266f16fa827c1e436e1da2ba533f62b8a09cadfd2cadff/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.640909 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.641652 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.644525 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.644588 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-72475b1a-e333-47b6-9122-2cc81e2043ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72475b1a-e333-47b6-9122-2cc81e2043ab\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94572e61204e52cd39172f1f25c1fc5f6fe24f21e330d1a33a8710c428a7c409/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.653291 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ad3d84-530d-45e9-928d-c552448aec20-config\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.662378 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/78ad3d84-530d-45e9-928d-c552448aec20-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.667692 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhzjc\" (UniqueName: \"kubernetes.io/projected/78ad3d84-530d-45e9-928d-c552448aec20-kube-api-access-xhzjc\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.678207 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-72475b1a-e333-47b6-9122-2cc81e2043ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72475b1a-e333-47b6-9122-2cc81e2043ab\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.685692 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89758075-f36e-45b6-8df4-681936387c1c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89758075-f36e-45b6-8df4-681936387c1c\") pod \"logging-loki-ingester-0\" (UID: \"78ad3d84-530d-45e9-928d-c552448aec20\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.702984 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.727379 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.729155 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.732351 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.732415 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fb7ba42-5d69-44aa-87b2-28130157852b-config\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.732440 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.732477 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b14f64df-5e16-4cee-902d-0f57d6e062fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b14f64df-5e16-4cee-902d-0f57d6e062fb\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.732507 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-798kt\" (UniqueName: \"kubernetes.io/projected/7fb7ba42-5d69-44aa-87b2-28130157852b-kube-api-access-798kt\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.732534 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.732557 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.732760 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.735593 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.735783 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.775708 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.813967 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-42f7g"]
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.833801 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c61cab0d-5846-418e-94ca-35e8a6c31ca0-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.833848 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-67f780fe-681f-4f07-b8f2-69fd094a92cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67f780fe-681f-4f07-b8f2-69fd094a92cc\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834126 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834159 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834185 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fb7ba42-5d69-44aa-87b2-28130157852b-config\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834207 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834234 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b14f64df-5e16-4cee-902d-0f57d6e062fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b14f64df-5e16-4cee-902d-0f57d6e062fb\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834256 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx44w\" (UniqueName: \"kubernetes.io/projected/c61cab0d-5846-418e-94ca-35e8a6c31ca0-kube-api-access-fx44w\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834274 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-798kt\" (UniqueName: \"kubernetes.io/projected/7fb7ba42-5d69-44aa-87b2-28130157852b-kube-api-access-798kt\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834304 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834337 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834361 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834400 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.834424 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.837074 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.837130 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b14f64df-5e16-4cee-902d-0f57d6e062fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b14f64df-5e16-4cee-902d-0f57d6e062fb\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ca9cef0e387c18504a5f4ae98e0f0efedf9abd494e9216797ae8d26a77357354/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.838529 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fb7ba42-5d69-44aa-87b2-28130157852b-config\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.839224 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.840154 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.840729 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.842224 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/7fb7ba42-5d69-44aa-87b2-28130157852b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: W0103 05:54:03.845456 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb98c17f7_1569_4c33_ab65_f4c2ba0555ae.slice/crio-9751a4195ed77d9868c1e4a2c76730f8a01bd0dd4b8cb873953d3aa31a0b68c7 WatchSource:0}: Error finding container 9751a4195ed77d9868c1e4a2c76730f8a01bd0dd4b8cb873953d3aa31a0b68c7: Status 404 returned error can't find the container with id 9751a4195ed77d9868c1e4a2c76730f8a01bd0dd4b8cb873953d3aa31a0b68c7
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.851371 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-798kt\" (UniqueName: \"kubernetes.io/projected/7fb7ba42-5d69-44aa-87b2-28130157852b-kube-api-access-798kt\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.863135 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b14f64df-5e16-4cee-902d-0f57d6e062fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b14f64df-5e16-4cee-902d-0f57d6e062fb\") pod \"logging-loki-compactor-0\" (UID: \"7fb7ba42-5d69-44aa-87b2-28130157852b\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.927670 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.933365 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" event={"ID":"128d93c6-02aa-4f68-aac6-cfcab1896a35","Type":"ContainerStarted","Data":"d0ff57d3185d99b1ed2ce2dd11c07a30c90948062e53a4684395df30a573865c"}
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.936112 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.936186 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.936230 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c61cab0d-5846-418e-94ca-35e8a6c31ca0-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.936255 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-67f780fe-681f-4f07-b8f2-69fd094a92cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67f780fe-681f-4f07-b8f2-69fd094a92cc\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.936299 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.936323 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.936358 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx44w\" (UniqueName: \"kubernetes.io/projected/c61cab0d-5846-418e-94ca-35e8a6c31ca0-kube-api-access-fx44w\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.937515 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c61cab0d-5846-418e-94ca-35e8a6c31ca0-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.940272 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.940315 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-67f780fe-681f-4f07-b8f2-69fd094a92cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67f780fe-681f-4f07-b8f2-69fd094a92cc\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7027252d949e4232c061dec401c0cbbc1d9b399208dc2cc3253ea615c0241607/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.941114 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" event={"ID":"b98c17f7-1569-4c33-ab65-f4c2ba0555ae","Type":"ContainerStarted","Data":"9751a4195ed77d9868c1e4a2c76730f8a01bd0dd4b8cb873953d3aa31a0b68c7"}
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.941562 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.945303 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" event={"ID":"66f9492b-16b5-4b86-bb22-560ad0f8001c","Type":"ContainerStarted","Data":"73a39fc6b59d1c76ab2594d5e6c85d7b49371c18333c74ba6f28df956396db2b"}
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.946485 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.956731 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx44w\" (UniqueName: \"kubernetes.io/projected/c61cab0d-5846-418e-94ca-35e8a6c31ca0-kube-api-access-fx44w\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.957098 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-index-gateway-grpc\")
pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.958918 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c61cab0d-5846-418e-94ca-35e8a6c31ca0-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.969374 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx"] Jan 03 05:54:03 crc kubenswrapper[4854]: I0103 05:54:03.978839 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-67f780fe-681f-4f07-b8f2-69fd094a92cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67f780fe-681f-4f07-b8f2-69fd094a92cc\") pod \"logging-loki-index-gateway-0\" (UID: \"c61cab0d-5846-418e-94ca-35e8a6c31ca0\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.056418 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.157037 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-656bf7cf7c-98v92"] Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.314609 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.437275 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.446296 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 03 05:54:04 crc kubenswrapper[4854]: W0103 05:54:04.456428 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc61cab0d_5846_418e_94ca_35e8a6c31ca0.slice/crio-51ee298fad56a99697400a1cca1115fc3e4234f55146821e707d8798c0a9dabd WatchSource:0}: Error finding container 51ee298fad56a99697400a1cca1115fc3e4234f55146821e707d8798c0a9dabd: Status 404 returned error can't find the container with id 51ee298fad56a99697400a1cca1115fc3e4234f55146821e707d8798c0a9dabd Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.961141 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"7fb7ba42-5d69-44aa-87b2-28130157852b","Type":"ContainerStarted","Data":"c6576bd18abacc0287b1877d6659aff8faa5cc056e9676537538abf1e9736d7e"} Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.963055 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" event={"ID":"4de190f3-1f91-4bd7-9d46-df7235633d58","Type":"ContainerStarted","Data":"9888b6478b768b6cc8407445a23f61ccfa1d790f0d17662fe7df393cd9b03e4f"} Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.965393 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" 
event={"ID":"78ad3d84-530d-45e9-928d-c552448aec20","Type":"ContainerStarted","Data":"b5a3c7dc1ff9da6605c6a7a5e3539adaebee18644279684be901b046d14ec142"} Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.966905 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"c61cab0d-5846-418e-94ca-35e8a6c31ca0","Type":"ContainerStarted","Data":"51ee298fad56a99697400a1cca1115fc3e4234f55146821e707d8798c0a9dabd"} Jan 03 05:54:04 crc kubenswrapper[4854]: I0103 05:54:04.968691 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" event={"ID":"428c2117-0003-47b2-abfa-f4f7930e126c","Type":"ContainerStarted","Data":"cd01d0d0f286cada5fe1588e31ba54071d15cc7ef88a3de2c361659e7deda1f6"} Jan 03 05:54:11 crc kubenswrapper[4854]: I0103 05:54:11.755419 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:54:11 crc kubenswrapper[4854]: I0103 05:54:11.755957 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.295688 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"c61cab0d-5846-418e-94ca-35e8a6c31ca0","Type":"ContainerStarted","Data":"1a81491da5366309bf5334ffa7cf6bc6e111dc8501852dcdf9d36452c3c1402a"} Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.296488 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.298251 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" event={"ID":"428c2117-0003-47b2-abfa-f4f7930e126c","Type":"ContainerStarted","Data":"afc72991c1f6b30966f6245b88cef92e377ae71e672cdb7b70fe4ae372d9433f"} Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.299695 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"7fb7ba42-5d69-44aa-87b2-28130157852b","Type":"ContainerStarted","Data":"4ea00cd94f3f727781b54222da084897ab79879224843fe177967c10b6517a8d"} Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.299834 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.302045 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" event={"ID":"4de190f3-1f91-4bd7-9d46-df7235633d58","Type":"ContainerStarted","Data":"97172b625de99383ee62354eed45e0fdc1ebc93060e8f2cdb8a5f6a75c7df47d"} Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.304949 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" 
event={"ID":"128d93c6-02aa-4f68-aac6-cfcab1896a35","Type":"ContainerStarted","Data":"38064fbfd722124667ab5c03797c9d637e5867f32e735c064bc1ee0791552a22"} Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.305901 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.307603 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" event={"ID":"b98c17f7-1569-4c33-ab65-f4c2ba0555ae","Type":"ContainerStarted","Data":"858e50665cc2f78ad8c1a72b602033881c4fc12dd9f8a21fe560e3ccc1799b95"} Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.308213 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.310593 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"78ad3d84-530d-45e9-928d-c552448aec20","Type":"ContainerStarted","Data":"4c3c9571da9bc93b0b3574f0c639c5c6825e474ff9be41fc2cedb952ae322d1a"} Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.311108 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.313073 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" event={"ID":"66f9492b-16b5-4b86-bb22-560ad0f8001c","Type":"ContainerStarted","Data":"67d9bdeecdd7124ebae6ff2d531a7737f92abe4b254e5cae9b9b8fe75b80f551"} Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.313677 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.323136 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.53929656 podStartE2EDuration="13.323089281s" podCreationTimestamp="2026-01-03 05:54:02 +0000 UTC" firstStartedPulling="2026-01-03 05:54:04.460622996 +0000 UTC m=+822.787199568" lastFinishedPulling="2026-01-03 05:54:14.244415717 +0000 UTC m=+832.570992289" observedRunningTime="2026-01-03 05:54:15.319635905 +0000 UTC m=+833.646212477" watchObservedRunningTime="2026-01-03 05:54:15.323089281 +0000 UTC m=+833.649665863" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.346301 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.499738876 podStartE2EDuration="13.346270719s" podCreationTimestamp="2026-01-03 05:54:02 +0000 UTC" firstStartedPulling="2026-01-03 05:54:04.449008407 +0000 UTC m=+822.775584979" lastFinishedPulling="2026-01-03 05:54:14.29554025 +0000 UTC m=+832.622116822" observedRunningTime="2026-01-03 05:54:15.344976306 +0000 UTC m=+833.671552898" watchObservedRunningTime="2026-01-03 05:54:15.346270719 +0000 UTC m=+833.672847311" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.363120 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" podStartSLOduration=2.50613642 podStartE2EDuration="13.363101298s" podCreationTimestamp="2026-01-03 05:54:02 +0000 UTC" firstStartedPulling="2026-01-03 05:54:03.528243256 +0000 UTC 
m=+821.854819828" lastFinishedPulling="2026-01-03 05:54:14.385208094 +0000 UTC m=+832.711784706" observedRunningTime="2026-01-03 05:54:15.362348489 +0000 UTC m=+833.688925061" watchObservedRunningTime="2026-01-03 05:54:15.363101298 +0000 UTC m=+833.689677880" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.398833 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.358793266 podStartE2EDuration="13.398815837s" podCreationTimestamp="2026-01-03 05:54:02 +0000 UTC" firstStartedPulling="2026-01-03 05:54:04.345253444 +0000 UTC m=+822.671830026" lastFinishedPulling="2026-01-03 05:54:14.385276025 +0000 UTC m=+832.711852597" observedRunningTime="2026-01-03 05:54:15.39490339 +0000 UTC m=+833.721479962" watchObservedRunningTime="2026-01-03 05:54:15.398815837 +0000 UTC m=+833.725392399" Jan 03 05:54:15 crc kubenswrapper[4854]: I0103 05:54:15.415845 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" podStartSLOduration=2.6028678899999997 podStartE2EDuration="13.415810801s" podCreationTimestamp="2026-01-03 05:54:02 +0000 UTC" firstStartedPulling="2026-01-03 05:54:03.51193776 +0000 UTC m=+821.838514332" lastFinishedPulling="2026-01-03 05:54:14.324880641 +0000 UTC m=+832.651457243" observedRunningTime="2026-01-03 05:54:15.415507393 +0000 UTC m=+833.742083975" watchObservedRunningTime="2026-01-03 05:54:15.415810801 +0000 UTC m=+833.742387373" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.375907 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" event={"ID":"4de190f3-1f91-4bd7-9d46-df7235633d58","Type":"ContainerStarted","Data":"6de6cb6203d572cfcc97a0c5cd2b45caf1e19913a2e506d4d1b35553520987eb"} Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.379536 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.380261 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.388323 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" event={"ID":"428c2117-0003-47b2-abfa-f4f7930e126c","Type":"ContainerStarted","Data":"75b4c84df2019e6ed1b0fd093e71f6e8e421b3eebe2cfc692441109cefe69634"} Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.388722 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.411688 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.420997 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podStartSLOduration=3.008642055 podStartE2EDuration="17.420973421s" podCreationTimestamp="2026-01-03 05:54:02 +0000 UTC" firstStartedPulling="2026-01-03 05:54:03.993303498 +0000 UTC m=+822.319880070" lastFinishedPulling="2026-01-03 05:54:18.405634864 +0000 UTC m=+836.732211436" observedRunningTime="2026-01-03 05:54:19.414011448 +0000 UTC m=+837.740588100" 
watchObservedRunningTime="2026-01-03 05:54:19.420973421 +0000 UTC m=+837.747550033" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.424343 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" podStartSLOduration=6.864820765 podStartE2EDuration="17.424332565s" podCreationTimestamp="2026-01-03 05:54:02 +0000 UTC" firstStartedPulling="2026-01-03 05:54:03.85727091 +0000 UTC m=+822.183847482" lastFinishedPulling="2026-01-03 05:54:14.41678271 +0000 UTC m=+832.743359282" observedRunningTime="2026-01-03 05:54:15.440245089 +0000 UTC m=+833.766821671" watchObservedRunningTime="2026-01-03 05:54:19.424332565 +0000 UTC m=+837.750909147" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.425508 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.426268 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" Jan 03 05:54:19 crc kubenswrapper[4854]: I0103 05:54:19.456045 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podStartSLOduration=3.21721905 podStartE2EDuration="17.456019664s" podCreationTimestamp="2026-01-03 05:54:02 +0000 UTC" firstStartedPulling="2026-01-03 05:54:04.161879297 +0000 UTC m=+822.488455869" lastFinishedPulling="2026-01-03 05:54:18.400679911 +0000 UTC m=+836.727256483" observedRunningTime="2026-01-03 05:54:19.446471716 +0000 UTC m=+837.773048318" watchObservedRunningTime="2026-01-03 05:54:19.456019664 +0000 UTC m=+837.782596276" Jan 03 05:54:20 crc kubenswrapper[4854]: I0103 05:54:20.395470 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" Jan 03 05:54:20 crc kubenswrapper[4854]: I0103 05:54:20.411176 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" Jan 03 05:54:32 crc kubenswrapper[4854]: I0103 05:54:32.547006 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 05:54:32 crc kubenswrapper[4854]: I0103 05:54:32.857566 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 05:54:32 crc kubenswrapper[4854]: I0103 05:54:32.966850 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" Jan 03 05:54:33 crc kubenswrapper[4854]: I0103 05:54:33.784289 4854 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 03 05:54:33 crc kubenswrapper[4854]: I0103 05:54:33.784668 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 03 05:54:33 crc kubenswrapper[4854]: I0103 05:54:33.937690 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-compactor-0" Jan 03 05:54:34 crc kubenswrapper[4854]: I0103 05:54:34.064278 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 03 05:54:41 crc kubenswrapper[4854]: I0103 05:54:41.756269 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:54:41 crc kubenswrapper[4854]: I0103 05:54:41.757019 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:54:41 crc kubenswrapper[4854]: I0103 05:54:41.757133 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:54:41 crc kubenswrapper[4854]: I0103 05:54:41.758154 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e01a9e027959d17e8604f32720a945b283546578a7ff1bc2cd05356d9cba66ad"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:54:41 crc kubenswrapper[4854]: I0103 05:54:41.758255 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://e01a9e027959d17e8604f32720a945b283546578a7ff1bc2cd05356d9cba66ad" gracePeriod=600 Jan 03 05:54:42 crc kubenswrapper[4854]: I0103 05:54:42.589762 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="e01a9e027959d17e8604f32720a945b283546578a7ff1bc2cd05356d9cba66ad" exitCode=0 Jan 03 05:54:42 crc kubenswrapper[4854]: I0103 05:54:42.589840 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"e01a9e027959d17e8604f32720a945b283546578a7ff1bc2cd05356d9cba66ad"} Jan 03 05:54:42 crc kubenswrapper[4854]: I0103 05:54:42.590164 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"382eac6c86719b2cf06557df9d71397fec24546fd4a1359e257bb73a0fbe3ef6"} Jan 03 05:54:42 crc kubenswrapper[4854]: I0103 05:54:42.590190 4854 scope.go:117] "RemoveContainer" containerID="75bb4ac621ba37ad54638b77615a99cc4b805eef98715dac578d470754e1c858" Jan 03 05:54:43 crc kubenswrapper[4854]: I0103 05:54:43.781009 4854 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 03 05:54:43 crc kubenswrapper[4854]: I0103 05:54:43.781381 4854 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.534226 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ps7v4"] Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.537133 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.556262 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ps7v4"] Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.655552 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-catalog-content\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.655843 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-utilities\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.656354 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk72n\" (UniqueName: \"kubernetes.io/projected/476034b9-cce1-44fa-b56a-2afa2e90f59d-kube-api-access-xk72n\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.757988 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-catalog-content\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.758044 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-utilities\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.758110 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk72n\" (UniqueName: \"kubernetes.io/projected/476034b9-cce1-44fa-b56a-2afa2e90f59d-kube-api-access-xk72n\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.758637 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-catalog-content\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " 
pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.758836 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-utilities\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.787695 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk72n\" (UniqueName: \"kubernetes.io/projected/476034b9-cce1-44fa-b56a-2afa2e90f59d-kube-api-access-xk72n\") pod \"certified-operators-ps7v4\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:44 crc kubenswrapper[4854]: I0103 05:54:44.902612 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:45 crc kubenswrapper[4854]: I0103 05:54:45.544502 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ps7v4"] Jan 03 05:54:45 crc kubenswrapper[4854]: W0103 05:54:45.550310 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476034b9_cce1_44fa_b56a_2afa2e90f59d.slice/crio-3973521e322a73dbf8d49db8c41bc86a6e114a857c11106130c5992176c24803 WatchSource:0}: Error finding container 3973521e322a73dbf8d49db8c41bc86a6e114a857c11106130c5992176c24803: Status 404 returned error can't find the container with id 3973521e322a73dbf8d49db8c41bc86a6e114a857c11106130c5992176c24803 Jan 03 05:54:45 crc kubenswrapper[4854]: I0103 05:54:45.621514 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ps7v4" event={"ID":"476034b9-cce1-44fa-b56a-2afa2e90f59d","Type":"ContainerStarted","Data":"3973521e322a73dbf8d49db8c41bc86a6e114a857c11106130c5992176c24803"} Jan 03 05:54:46 crc kubenswrapper[4854]: I0103 05:54:46.632409 4854 generic.go:334] "Generic (PLEG): container finished" podID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerID="fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913" exitCode=0 Jan 03 05:54:46 crc kubenswrapper[4854]: I0103 05:54:46.632482 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ps7v4" event={"ID":"476034b9-cce1-44fa-b56a-2afa2e90f59d","Type":"ContainerDied","Data":"fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913"} Jan 03 05:54:48 crc kubenswrapper[4854]: I0103 05:54:48.649877 4854 generic.go:334] "Generic (PLEG): container finished" podID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerID="c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb" exitCode=0 Jan 03 05:54:48 crc kubenswrapper[4854]: I0103 05:54:48.649937 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ps7v4" event={"ID":"476034b9-cce1-44fa-b56a-2afa2e90f59d","Type":"ContainerDied","Data":"c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb"} Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.410162 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vcdkk"] Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.412819 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.431349 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcdkk"] Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.528843 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-catalog-content\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.529182 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-utilities\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.529295 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv46g\" (UniqueName: \"kubernetes.io/projected/efa487fc-6641-45d4-9671-f23f9dcc18aa-kube-api-access-nv46g\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.631306 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-utilities\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.631369 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv46g\" (UniqueName: \"kubernetes.io/projected/efa487fc-6641-45d4-9671-f23f9dcc18aa-kube-api-access-nv46g\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.631482 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-catalog-content\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.631893 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-utilities\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.632005 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-catalog-content\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.662216 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nv46g\" (UniqueName: \"kubernetes.io/projected/efa487fc-6641-45d4-9671-f23f9dcc18aa-kube-api-access-nv46g\") pod \"redhat-marketplace-vcdkk\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.675333 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ps7v4" event={"ID":"476034b9-cce1-44fa-b56a-2afa2e90f59d","Type":"ContainerStarted","Data":"8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d"} Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.699448 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ps7v4" podStartSLOduration=3.769493514 podStartE2EDuration="7.699432111s" podCreationTimestamp="2026-01-03 05:54:44 +0000 UTC" firstStartedPulling="2026-01-03 05:54:46.635316668 +0000 UTC m=+864.961893250" lastFinishedPulling="2026-01-03 05:54:50.565255275 +0000 UTC m=+868.891831847" observedRunningTime="2026-01-03 05:54:51.695504053 +0000 UTC m=+870.022080625" watchObservedRunningTime="2026-01-03 05:54:51.699432111 +0000 UTC m=+870.026008683" Jan 03 05:54:51 crc kubenswrapper[4854]: I0103 05:54:51.738525 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:54:52 crc kubenswrapper[4854]: I0103 05:54:52.320236 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcdkk"] Jan 03 05:54:52 crc kubenswrapper[4854]: I0103 05:54:52.683898 4854 generic.go:334] "Generic (PLEG): container finished" podID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerID="381652707b43373ad879d72212b50cdc204c353c415b6691579b24d1e8707152" exitCode=0 Jan 03 05:54:52 crc kubenswrapper[4854]: I0103 05:54:52.684068 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcdkk" event={"ID":"efa487fc-6641-45d4-9671-f23f9dcc18aa","Type":"ContainerDied","Data":"381652707b43373ad879d72212b50cdc204c353c415b6691579b24d1e8707152"} Jan 03 05:54:52 crc kubenswrapper[4854]: I0103 05:54:52.684684 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcdkk" event={"ID":"efa487fc-6641-45d4-9671-f23f9dcc18aa","Type":"ContainerStarted","Data":"ddde244f1cfdc6f6d99c63720b3573ee781a7e229d6fe0aeaf70900496e64edc"} Jan 03 05:54:53 crc kubenswrapper[4854]: I0103 05:54:53.781100 4854 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 03 05:54:53 crc kubenswrapper[4854]: I0103 05:54:53.781527 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 03 05:54:54 crc kubenswrapper[4854]: I0103 05:54:54.927951 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:54 crc kubenswrapper[4854]: I0103 05:54:54.928381 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:55 crc 
kubenswrapper[4854]: I0103 05:54:55.007619 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:55 crc kubenswrapper[4854]: I0103 05:54:55.730434 4854 generic.go:334] "Generic (PLEG): container finished" podID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerID="b01f2456f56f0ea4a1302cb5cfb792bf9b403c4a0a5222c457b5837bcd5f172e" exitCode=0 Jan 03 05:54:55 crc kubenswrapper[4854]: I0103 05:54:55.730505 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcdkk" event={"ID":"efa487fc-6641-45d4-9671-f23f9dcc18aa","Type":"ContainerDied","Data":"b01f2456f56f0ea4a1302cb5cfb792bf9b403c4a0a5222c457b5837bcd5f172e"} Jan 03 05:54:55 crc kubenswrapper[4854]: I0103 05:54:55.793982 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:56 crc kubenswrapper[4854]: I0103 05:54:56.738994 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcdkk" event={"ID":"efa487fc-6641-45d4-9671-f23f9dcc18aa","Type":"ContainerStarted","Data":"380b455cc7c6566ea37d9b6856ecb180dbbda5a12a40576b498d74ced32c1961"} Jan 03 05:54:56 crc kubenswrapper[4854]: I0103 05:54:56.759976 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vcdkk" podStartSLOduration=2.293954433 podStartE2EDuration="5.759954186s" podCreationTimestamp="2026-01-03 05:54:51 +0000 UTC" firstStartedPulling="2026-01-03 05:54:52.685947341 +0000 UTC m=+871.012523913" lastFinishedPulling="2026-01-03 05:54:56.151947094 +0000 UTC m=+874.478523666" observedRunningTime="2026-01-03 05:54:56.759142316 +0000 UTC m=+875.085718908" watchObservedRunningTime="2026-01-03 05:54:56.759954186 +0000 UTC m=+875.086530768" Jan 03 05:54:58 crc kubenswrapper[4854]: I0103 05:54:58.186886 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ps7v4"] Jan 03 05:54:58 crc kubenswrapper[4854]: I0103 05:54:58.187105 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ps7v4" podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerName="registry-server" containerID="cri-o://8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d" gracePeriod=2 Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.679581 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.764418 4854 generic.go:334] "Generic (PLEG): container finished" podID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerID="8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d" exitCode=0 Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.764479 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ps7v4" event={"ID":"476034b9-cce1-44fa-b56a-2afa2e90f59d","Type":"ContainerDied","Data":"8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d"} Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.764532 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ps7v4" event={"ID":"476034b9-cce1-44fa-b56a-2afa2e90f59d","Type":"ContainerDied","Data":"3973521e322a73dbf8d49db8c41bc86a6e114a857c11106130c5992176c24803"} Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.764550 4854 scope.go:117] "RemoveContainer" containerID="8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.764553 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ps7v4" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.775486 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-catalog-content\") pod \"476034b9-cce1-44fa-b56a-2afa2e90f59d\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.775540 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-utilities\") pod \"476034b9-cce1-44fa-b56a-2afa2e90f59d\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.775645 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk72n\" (UniqueName: \"kubernetes.io/projected/476034b9-cce1-44fa-b56a-2afa2e90f59d-kube-api-access-xk72n\") pod \"476034b9-cce1-44fa-b56a-2afa2e90f59d\" (UID: \"476034b9-cce1-44fa-b56a-2afa2e90f59d\") " Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.776452 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-utilities" (OuterVolumeSpecName: "utilities") pod "476034b9-cce1-44fa-b56a-2afa2e90f59d" (UID: "476034b9-cce1-44fa-b56a-2afa2e90f59d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.785513 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476034b9-cce1-44fa-b56a-2afa2e90f59d-kube-api-access-xk72n" (OuterVolumeSpecName: "kube-api-access-xk72n") pod "476034b9-cce1-44fa-b56a-2afa2e90f59d" (UID: "476034b9-cce1-44fa-b56a-2afa2e90f59d"). InnerVolumeSpecName "kube-api-access-xk72n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.785552 4854 scope.go:117] "RemoveContainer" containerID="c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.821850 4854 scope.go:117] "RemoveContainer" containerID="fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.841285 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "476034b9-cce1-44fa-b56a-2afa2e90f59d" (UID: "476034b9-cce1-44fa-b56a-2afa2e90f59d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.850938 4854 scope.go:117] "RemoveContainer" containerID="8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d" Jan 03 05:54:59 crc kubenswrapper[4854]: E0103 05:54:59.851393 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d\": container with ID starting with 8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d not found: ID does not exist" containerID="8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.851433 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d"} err="failed to get container status \"8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d\": rpc error: code = NotFound desc = could not find container \"8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d\": container with ID starting with 8804c7a79a06c795c3279a7360d096de4fe28522eea2d7a1b69e08a7b6c7b69d not found: ID does not exist" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.851461 4854 scope.go:117] "RemoveContainer" containerID="c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb" Jan 03 05:54:59 crc kubenswrapper[4854]: E0103 05:54:59.851672 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb\": container with ID starting with c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb not found: ID does not exist" containerID="c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.851699 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb"} err="failed to get container status \"c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb\": rpc error: code = NotFound desc = could not find container \"c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb\": container with ID starting with c1a12c8bfa438d0a87b56a5c17c6bc274b5abb22564b90b995d4859fbc486bcb not found: ID does not exist" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.851716 4854 scope.go:117] "RemoveContainer" containerID="fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913" Jan 03 05:54:59 crc kubenswrapper[4854]: 
E0103 05:54:59.851907 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913\": container with ID starting with fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913 not found: ID does not exist" containerID="fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.851929 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913"} err="failed to get container status \"fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913\": rpc error: code = NotFound desc = could not find container \"fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913\": container with ID starting with fb50a6f0e65043db1c193d93951e76046cc48f0de7972afba29f0a75c4427913 not found: ID does not exist" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.877311 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.877349 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476034b9-cce1-44fa-b56a-2afa2e90f59d-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:54:59 crc kubenswrapper[4854]: I0103 05:54:59.877362 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk72n\" (UniqueName: \"kubernetes.io/projected/476034b9-cce1-44fa-b56a-2afa2e90f59d-kube-api-access-xk72n\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:00 crc kubenswrapper[4854]: I0103 05:55:00.098600 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ps7v4"] Jan 03 05:55:00 crc kubenswrapper[4854]: I0103 05:55:00.103884 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ps7v4"] Jan 03 05:55:00 crc kubenswrapper[4854]: I0103 05:55:00.126563 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" path="/var/lib/kubelet/pods/476034b9-cce1-44fa-b56a-2afa2e90f59d/volumes" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.601393 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mrl5b"] Jan 03 05:55:01 crc kubenswrapper[4854]: E0103 05:55:01.602288 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerName="registry-server" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.602311 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerName="registry-server" Jan 03 05:55:01 crc kubenswrapper[4854]: E0103 05:55:01.602339 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerName="extract-utilities" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.602353 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerName="extract-utilities" Jan 03 05:55:01 crc kubenswrapper[4854]: E0103 05:55:01.602397 4854 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerName="extract-content" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.602411 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerName="extract-content" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.602645 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="476034b9-cce1-44fa-b56a-2afa2e90f59d" containerName="registry-server" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.604648 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.626828 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mrl5b"] Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.707889 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-catalog-content\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.707968 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-utilities\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.708210 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnwff\" (UniqueName: \"kubernetes.io/projected/6149b111-4eee-45bc-b18f-900493d313fd-kube-api-access-vnwff\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.739196 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.739247 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.793482 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.810514 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-catalog-content\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.810574 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-utilities\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.810666 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vnwff\" (UniqueName: \"kubernetes.io/projected/6149b111-4eee-45bc-b18f-900493d313fd-kube-api-access-vnwff\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.811227 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-catalog-content\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.811255 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-utilities\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.840342 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnwff\" (UniqueName: \"kubernetes.io/projected/6149b111-4eee-45bc-b18f-900493d313fd-kube-api-access-vnwff\") pod \"community-operators-mrl5b\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.854785 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:55:01 crc kubenswrapper[4854]: I0103 05:55:01.927281 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:02 crc kubenswrapper[4854]: I0103 05:55:02.439574 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mrl5b"] Jan 03 05:55:02 crc kubenswrapper[4854]: I0103 05:55:02.792842 4854 generic.go:334] "Generic (PLEG): container finished" podID="6149b111-4eee-45bc-b18f-900493d313fd" containerID="8238cecd4b32b47b0f3eb189365bd47a9edf38581ba566ec729415bd0564a9ce" exitCode=0 Jan 03 05:55:02 crc kubenswrapper[4854]: I0103 05:55:02.792916 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrl5b" event={"ID":"6149b111-4eee-45bc-b18f-900493d313fd","Type":"ContainerDied","Data":"8238cecd4b32b47b0f3eb189365bd47a9edf38581ba566ec729415bd0564a9ce"} Jan 03 05:55:02 crc kubenswrapper[4854]: I0103 05:55:02.792948 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrl5b" event={"ID":"6149b111-4eee-45bc-b18f-900493d313fd","Type":"ContainerStarted","Data":"bcd6140c00e1e63443204793e0acd4120e09d0978024ac86040543e8345a225d"} Jan 03 05:55:03 crc kubenswrapper[4854]: I0103 05:55:03.792147 4854 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 03 05:55:03 crc kubenswrapper[4854]: I0103 05:55:03.792837 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 03 05:55:04 crc kubenswrapper[4854]: I0103 05:55:04.595061 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcdkk"] Jan 03 05:55:04 crc kubenswrapper[4854]: I0103 05:55:04.595816 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vcdkk" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerName="registry-server" containerID="cri-o://380b455cc7c6566ea37d9b6856ecb180dbbda5a12a40576b498d74ced32c1961" gracePeriod=2 Jan 03 05:55:04 crc kubenswrapper[4854]: I0103 05:55:04.815922 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrl5b" event={"ID":"6149b111-4eee-45bc-b18f-900493d313fd","Type":"ContainerStarted","Data":"25309409774bab5bfa94c0e3caf00754714efc15b47a31d70a4fa93d38b86b7e"} Jan 03 05:55:05 crc kubenswrapper[4854]: I0103 05:55:05.832705 4854 generic.go:334] "Generic (PLEG): container finished" podID="6149b111-4eee-45bc-b18f-900493d313fd" containerID="25309409774bab5bfa94c0e3caf00754714efc15b47a31d70a4fa93d38b86b7e" exitCode=0 Jan 03 05:55:05 crc kubenswrapper[4854]: I0103 05:55:05.832804 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrl5b" event={"ID":"6149b111-4eee-45bc-b18f-900493d313fd","Type":"ContainerDied","Data":"25309409774bab5bfa94c0e3caf00754714efc15b47a31d70a4fa93d38b86b7e"} Jan 03 05:55:05 crc kubenswrapper[4854]: I0103 05:55:05.835945 4854 generic.go:334] "Generic (PLEG): container finished" podID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerID="380b455cc7c6566ea37d9b6856ecb180dbbda5a12a40576b498d74ced32c1961" exitCode=0 Jan 03 05:55:05 crc kubenswrapper[4854]: I0103 
05:55:05.836000 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcdkk" event={"ID":"efa487fc-6641-45d4-9671-f23f9dcc18aa","Type":"ContainerDied","Data":"380b455cc7c6566ea37d9b6856ecb180dbbda5a12a40576b498d74ced32c1961"} Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.024537 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.090960 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-catalog-content\") pod \"efa487fc-6641-45d4-9671-f23f9dcc18aa\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.091066 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-utilities\") pod \"efa487fc-6641-45d4-9671-f23f9dcc18aa\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.091278 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv46g\" (UniqueName: \"kubernetes.io/projected/efa487fc-6641-45d4-9671-f23f9dcc18aa-kube-api-access-nv46g\") pod \"efa487fc-6641-45d4-9671-f23f9dcc18aa\" (UID: \"efa487fc-6641-45d4-9671-f23f9dcc18aa\") " Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.092498 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-utilities" (OuterVolumeSpecName: "utilities") pod "efa487fc-6641-45d4-9671-f23f9dcc18aa" (UID: "efa487fc-6641-45d4-9671-f23f9dcc18aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.097356 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa487fc-6641-45d4-9671-f23f9dcc18aa-kube-api-access-nv46g" (OuterVolumeSpecName: "kube-api-access-nv46g") pod "efa487fc-6641-45d4-9671-f23f9dcc18aa" (UID: "efa487fc-6641-45d4-9671-f23f9dcc18aa"). InnerVolumeSpecName "kube-api-access-nv46g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.111185 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efa487fc-6641-45d4-9671-f23f9dcc18aa" (UID: "efa487fc-6641-45d4-9671-f23f9dcc18aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.193185 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv46g\" (UniqueName: \"kubernetes.io/projected/efa487fc-6641-45d4-9671-f23f9dcc18aa-kube-api-access-nv46g\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.193232 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.193246 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa487fc-6641-45d4-9671-f23f9dcc18aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.854431 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcdkk" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.854680 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcdkk" event={"ID":"efa487fc-6641-45d4-9671-f23f9dcc18aa","Type":"ContainerDied","Data":"ddde244f1cfdc6f6d99c63720b3573ee781a7e229d6fe0aeaf70900496e64edc"} Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.855097 4854 scope.go:117] "RemoveContainer" containerID="380b455cc7c6566ea37d9b6856ecb180dbbda5a12a40576b498d74ced32c1961" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.858524 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrl5b" event={"ID":"6149b111-4eee-45bc-b18f-900493d313fd","Type":"ContainerStarted","Data":"21d25c83427ca3b94bc83320d85927f84bc1a79853526c8c315dc3ea9c799e88"} Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.885916 4854 scope.go:117] "RemoveContainer" containerID="b01f2456f56f0ea4a1302cb5cfb792bf9b403c4a0a5222c457b5837bcd5f172e" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.904270 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mrl5b" podStartSLOduration=2.237235416 podStartE2EDuration="5.904251645s" podCreationTimestamp="2026-01-03 05:55:01 +0000 UTC" firstStartedPulling="2026-01-03 05:55:02.795374891 +0000 UTC m=+881.121951463" lastFinishedPulling="2026-01-03 05:55:06.46239111 +0000 UTC m=+884.788967692" observedRunningTime="2026-01-03 05:55:06.901891696 +0000 UTC m=+885.228468298" watchObservedRunningTime="2026-01-03 05:55:06.904251645 +0000 UTC m=+885.230828237" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.923034 4854 scope.go:117] "RemoveContainer" containerID="381652707b43373ad879d72212b50cdc204c353c415b6691579b24d1e8707152" Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.946430 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcdkk"] Jan 03 05:55:06 crc kubenswrapper[4854]: I0103 05:55:06.956591 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcdkk"] Jan 03 05:55:08 crc kubenswrapper[4854]: I0103 05:55:08.131691 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" path="/var/lib/kubelet/pods/efa487fc-6641-45d4-9671-f23f9dcc18aa/volumes" Jan 03 05:55:11 crc kubenswrapper[4854]: I0103 05:55:11.927824 4854 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:11 crc kubenswrapper[4854]: I0103 05:55:11.928675 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:11 crc kubenswrapper[4854]: I0103 05:55:11.982880 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:12 crc kubenswrapper[4854]: I0103 05:55:12.987667 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:13 crc kubenswrapper[4854]: I0103 05:55:13.035252 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mrl5b"] Jan 03 05:55:13 crc kubenswrapper[4854]: I0103 05:55:13.784001 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 03 05:55:14 crc kubenswrapper[4854]: I0103 05:55:14.936424 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mrl5b" podUID="6149b111-4eee-45bc-b18f-900493d313fd" containerName="registry-server" containerID="cri-o://21d25c83427ca3b94bc83320d85927f84bc1a79853526c8c315dc3ea9c799e88" gracePeriod=2 Jan 03 05:55:15 crc kubenswrapper[4854]: I0103 05:55:15.943003 4854 generic.go:334] "Generic (PLEG): container finished" podID="6149b111-4eee-45bc-b18f-900493d313fd" containerID="21d25c83427ca3b94bc83320d85927f84bc1a79853526c8c315dc3ea9c799e88" exitCode=0 Jan 03 05:55:15 crc kubenswrapper[4854]: I0103 05:55:15.943304 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrl5b" event={"ID":"6149b111-4eee-45bc-b18f-900493d313fd","Type":"ContainerDied","Data":"21d25c83427ca3b94bc83320d85927f84bc1a79853526c8c315dc3ea9c799e88"} Jan 03 05:55:15 crc kubenswrapper[4854]: I0103 05:55:15.995443 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.069402 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-utilities\") pod \"6149b111-4eee-45bc-b18f-900493d313fd\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.069482 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-catalog-content\") pod \"6149b111-4eee-45bc-b18f-900493d313fd\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.069540 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnwff\" (UniqueName: \"kubernetes.io/projected/6149b111-4eee-45bc-b18f-900493d313fd-kube-api-access-vnwff\") pod \"6149b111-4eee-45bc-b18f-900493d313fd\" (UID: \"6149b111-4eee-45bc-b18f-900493d313fd\") " Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.074314 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6149b111-4eee-45bc-b18f-900493d313fd-kube-api-access-vnwff" (OuterVolumeSpecName: "kube-api-access-vnwff") pod "6149b111-4eee-45bc-b18f-900493d313fd" (UID: "6149b111-4eee-45bc-b18f-900493d313fd"). InnerVolumeSpecName "kube-api-access-vnwff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.077824 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-utilities" (OuterVolumeSpecName: "utilities") pod "6149b111-4eee-45bc-b18f-900493d313fd" (UID: "6149b111-4eee-45bc-b18f-900493d313fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.139166 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6149b111-4eee-45bc-b18f-900493d313fd" (UID: "6149b111-4eee-45bc-b18f-900493d313fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.171508 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.171543 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnwff\" (UniqueName: \"kubernetes.io/projected/6149b111-4eee-45bc-b18f-900493d313fd-kube-api-access-vnwff\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.171553 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6149b111-4eee-45bc-b18f-900493d313fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.957945 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrl5b" event={"ID":"6149b111-4eee-45bc-b18f-900493d313fd","Type":"ContainerDied","Data":"bcd6140c00e1e63443204793e0acd4120e09d0978024ac86040543e8345a225d"} Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.958017 4854 scope.go:117] "RemoveContainer" containerID="21d25c83427ca3b94bc83320d85927f84bc1a79853526c8c315dc3ea9c799e88" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.958043 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrl5b" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.987367 4854 scope.go:117] "RemoveContainer" containerID="25309409774bab5bfa94c0e3caf00754714efc15b47a31d70a4fa93d38b86b7e" Jan 03 05:55:16 crc kubenswrapper[4854]: I0103 05:55:16.997252 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mrl5b"] Jan 03 05:55:17 crc kubenswrapper[4854]: I0103 05:55:17.012486 4854 scope.go:117] "RemoveContainer" containerID="8238cecd4b32b47b0f3eb189365bd47a9edf38581ba566ec729415bd0564a9ce" Jan 03 05:55:17 crc kubenswrapper[4854]: I0103 05:55:17.017040 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mrl5b"] Jan 03 05:55:18 crc kubenswrapper[4854]: I0103 05:55:18.136615 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6149b111-4eee-45bc-b18f-900493d313fd" path="/var/lib/kubelet/pods/6149b111-4eee-45bc-b18f-900493d313fd/volumes" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.782869 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-qdbbp"] Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.783663 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6149b111-4eee-45bc-b18f-900493d313fd" containerName="extract-content" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.783676 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6149b111-4eee-45bc-b18f-900493d313fd" containerName="extract-content" Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.783685 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerName="extract-content" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.783691 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerName="extract-content" Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.783700 4854 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerName="registry-server" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.783706 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerName="registry-server" Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.783716 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerName="extract-utilities" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.783723 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerName="extract-utilities" Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.783741 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6149b111-4eee-45bc-b18f-900493d313fd" containerName="registry-server" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.783746 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6149b111-4eee-45bc-b18f-900493d313fd" containerName="registry-server" Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.783755 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6149b111-4eee-45bc-b18f-900493d313fd" containerName="extract-utilities" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.783761 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6149b111-4eee-45bc-b18f-900493d313fd" containerName="extract-utilities" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.783876 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="6149b111-4eee-45bc-b18f-900493d313fd" containerName="registry-server" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.783889 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa487fc-6641-45d4-9671-f23f9dcc18aa" containerName="registry-server" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.790349 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.794334 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.794518 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.794755 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.794869 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-bstsj" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.794977 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.805359 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.811954 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-qdbbp"] Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.846581 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-entrypoint\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.846936 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-tmp\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847034 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-token\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847158 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config-openshift-service-cacrt\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847282 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847386 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config\") pod \"collector-qdbbp\" (UID: 
\"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847474 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-datadir\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847568 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-trusted-ca\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847641 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-sa-token\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847730 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82529\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-kube-api-access-82529\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.847851 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.864828 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-qdbbp"] Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.865496 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-82529 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-qdbbp" podUID="cbf0394a-6374-4db4-8b4a-0014e7c20a2e" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.949828 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config-openshift-service-cacrt\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.949931 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.949991 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950043 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-datadir\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950137 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-trusted-ca\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950191 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-sa-token\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950214 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82529\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-kube-api-access-82529\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950274 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950320 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-entrypoint\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950373 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-tmp\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950402 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-token\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.950533 4854 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.950641 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-datadir\") pod \"collector-qdbbp\" (UID: 
\"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.950966 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics podName:cbf0394a-6374-4db4-8b4a-0014e7c20a2e nodeName:}" failed. No retries permitted until 2026-01-03 05:55:21.450944414 +0000 UTC m=+899.777521036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics") pod "collector-qdbbp" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e") : secret "collector-metrics" not found Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.950601 4854 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.951461 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.951662 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-trusted-ca\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.952320 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-entrypoint\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: E0103 05:55:20.952414 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver podName:cbf0394a-6374-4db4-8b4a-0014e7c20a2e nodeName:}" failed. No retries permitted until 2026-01-03 05:55:21.45239917 +0000 UTC m=+899.778975742 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver") pod "collector-qdbbp" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e") : secret "collector-syslog-receiver" not found Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.954654 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config-openshift-service-cacrt\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.956406 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-tmp\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.957886 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-token\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.976477 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-sa-token\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:20 crc kubenswrapper[4854]: I0103 05:55:20.977091 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82529\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-kube-api-access-82529\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.000860 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-qdbbp" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.027412 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-qdbbp" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053218 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-entrypoint\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053286 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82529\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-kube-api-access-82529\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053356 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config-openshift-service-cacrt\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053404 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-tmp\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053437 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-trusted-ca\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053455 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-datadir\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053518 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-token\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053612 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.053635 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-sa-token\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.054547 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config" (OuterVolumeSpecName: "config") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.054671 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.054738 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.054820 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-datadir" (OuterVolumeSpecName: "datadir") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.054873 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.058526 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-token" (OuterVolumeSpecName: "collector-token") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.058851 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-kube-api-access-82529" (OuterVolumeSpecName: "kube-api-access-82529") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "kube-api-access-82529". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.061179 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-sa-token" (OuterVolumeSpecName: "sa-token") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.062939 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-tmp" (OuterVolumeSpecName: "tmp") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155788 4854 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155833 4854 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155847 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82529\" (UniqueName: \"kubernetes.io/projected/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-kube-api-access-82529\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155862 4854 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155874 4854 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-tmp\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155885 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155897 4854 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-datadir\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155907 4854 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-token\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.155918 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.460715 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.461173 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.464408 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:21 crc 
kubenswrapper[4854]: I0103 05:55:21.467490 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver\") pod \"collector-qdbbp\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " pod="openshift-logging/collector-qdbbp" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.561715 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.561761 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics\") pod \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\" (UID: \"cbf0394a-6374-4db4-8b4a-0014e7c20a2e\") " Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.565142 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.565174 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics" (OuterVolumeSpecName: "metrics") pod "cbf0394a-6374-4db4-8b4a-0014e7c20a2e" (UID: "cbf0394a-6374-4db4-8b4a-0014e7c20a2e"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.663779 4854 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:21 crc kubenswrapper[4854]: I0103 05:55:21.663816 4854 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/cbf0394a-6374-4db4-8b4a-0014e7c20a2e-metrics\") on node \"crc\" DevicePath \"\"" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.008964 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-qdbbp" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.073008 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-qdbbp"] Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.089846 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-qdbbp"] Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.101116 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-5k6mm"] Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.107549 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.112225 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-bstsj" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.112681 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.112885 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.113014 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.115466 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.124832 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.147555 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf0394a-6374-4db4-8b4a-0014e7c20a2e" path="/var/lib/kubelet/pods/cbf0394a-6374-4db4-8b4a-0014e7c20a2e/volumes" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.147984 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5k6mm"] Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.174734 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-config-openshift-service-cacrt\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.174809 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8b117764-4aa1-40eb-bd8f-516dc663ba89-datadir\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.174966 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8b117764-4aa1-40eb-bd8f-516dc663ba89-sa-token\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.175030 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-entrypoint\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.175098 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-collector-syslog-receiver\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.175134 4854 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b117764-4aa1-40eb-bd8f-516dc663ba89-tmp\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.175157 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-config\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.175228 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-collector-token\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.175253 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2g6\" (UniqueName: \"kubernetes.io/projected/8b117764-4aa1-40eb-bd8f-516dc663ba89-kube-api-access-pt2g6\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.175270 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-metrics\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.175336 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-trusted-ca\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.279921 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-collector-syslog-receiver\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.279999 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b117764-4aa1-40eb-bd8f-516dc663ba89-tmp\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280037 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-config\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280101 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-collector-token\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280128 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2g6\" (UniqueName: \"kubernetes.io/projected/8b117764-4aa1-40eb-bd8f-516dc663ba89-kube-api-access-pt2g6\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280152 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-metrics\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280201 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-trusted-ca\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280239 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-config-openshift-service-cacrt\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280272 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8b117764-4aa1-40eb-bd8f-516dc663ba89-datadir\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280312 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8b117764-4aa1-40eb-bd8f-516dc663ba89-sa-token\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.280339 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-entrypoint\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.281371 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-entrypoint\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.281443 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8b117764-4aa1-40eb-bd8f-516dc663ba89-datadir\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.285176 4854 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-config-openshift-service-cacrt\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.285314 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-config\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.285622 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b117764-4aa1-40eb-bd8f-516dc663ba89-trusted-ca\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.291588 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-collector-syslog-receiver\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.292505 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-metrics\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.294433 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8b117764-4aa1-40eb-bd8f-516dc663ba89-collector-token\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.308390 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b117764-4aa1-40eb-bd8f-516dc663ba89-tmp\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.308694 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2g6\" (UniqueName: \"kubernetes.io/projected/8b117764-4aa1-40eb-bd8f-516dc663ba89-kube-api-access-pt2g6\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.318360 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8b117764-4aa1-40eb-bd8f-516dc663ba89-sa-token\") pod \"collector-5k6mm\" (UID: \"8b117764-4aa1-40eb-bd8f-516dc663ba89\") " pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.440624 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-5k6mm" Jan 03 05:55:22 crc kubenswrapper[4854]: I0103 05:55:22.905741 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5k6mm"] Jan 03 05:55:23 crc kubenswrapper[4854]: I0103 05:55:23.016164 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5k6mm" event={"ID":"8b117764-4aa1-40eb-bd8f-516dc663ba89","Type":"ContainerStarted","Data":"f8c7019aeaa98ea42ef7ec81902f6be64bc83194596a8181940c42a3586a94d6"} Jan 03 05:55:23 crc kubenswrapper[4854]: E0103 05:55:23.220115 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:55:24 crc kubenswrapper[4854]: E0103 05:55:24.099893 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:55:32 crc kubenswrapper[4854]: I0103 05:55:32.133897 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5k6mm" event={"ID":"8b117764-4aa1-40eb-bd8f-516dc663ba89","Type":"ContainerStarted","Data":"9742f9b895cc0a66d9960b93afdc852986a8b8a8dfee813bed54911d8944ffa1"} Jan 03 05:55:32 crc kubenswrapper[4854]: I0103 05:55:32.183585 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-5k6mm" podStartSLOduration=2.108582347 podStartE2EDuration="10.183557258s" podCreationTimestamp="2026-01-03 05:55:22 +0000 UTC" firstStartedPulling="2026-01-03 05:55:22.91459759 +0000 UTC m=+901.241174172" lastFinishedPulling="2026-01-03 05:55:30.989572471 +0000 UTC m=+909.316149083" observedRunningTime="2026-01-03 05:55:32.179702292 +0000 UTC m=+910.506278904" watchObservedRunningTime="2026-01-03 05:55:32.183557258 +0000 UTC m=+910.510133850" Jan 03 05:55:33 crc kubenswrapper[4854]: E0103 05:55:33.427423 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:55:39 crc kubenswrapper[4854]: E0103 05:55:39.271001 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:55:43 crc kubenswrapper[4854]: E0103 05:55:43.484452 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:55:48 crc kubenswrapper[4854]: E0103 05:55:48.244064 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:55:48 crc kubenswrapper[4854]: E0103 05:55:48.244213 4854 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:55:53 crc kubenswrapper[4854]: E0103 05:55:53.700805 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:55:54 crc kubenswrapper[4854]: E0103 05:55:54.090979 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:56:03 crc kubenswrapper[4854]: E0103 05:56:03.734713 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.079152 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns"] Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.080603 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.082708 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.099751 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns"] Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.164620 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.165133 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m77tn\" (UniqueName: \"kubernetes.io/projected/aff9403d-691a-422b-baec-f78bc1688caa-kube-api-access-m77tn\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.165279 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.266413 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m77tn\" (UniqueName: \"kubernetes.io/projected/aff9403d-691a-422b-baec-f78bc1688caa-kube-api-access-m77tn\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.266457 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.266482 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.267025 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.267040 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.288931 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m77tn\" (UniqueName: \"kubernetes.io/projected/aff9403d-691a-422b-baec-f78bc1688caa-kube-api-access-m77tn\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:04 crc kubenswrapper[4854]: I0103 05:56:04.414399 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:05 crc kubenswrapper[4854]: I0103 05:56:05.048424 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns"] Jan 03 05:56:05 crc kubenswrapper[4854]: I0103 05:56:05.429534 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" event={"ID":"aff9403d-691a-422b-baec-f78bc1688caa","Type":"ContainerStarted","Data":"735573d4ea410c15ab79e7d899518cc7258ee00203a2c5084e040a12f76106d2"} Jan 03 05:56:06 crc kubenswrapper[4854]: I0103 05:56:06.444256 4854 generic.go:334] "Generic (PLEG): container finished" podID="aff9403d-691a-422b-baec-f78bc1688caa" containerID="2eb82245e5fb570266eb83e2d5cde464f36352dbcb01a05dcdf2260437582314" exitCode=0 Jan 03 05:56:06 crc kubenswrapper[4854]: I0103 05:56:06.444488 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" event={"ID":"aff9403d-691a-422b-baec-f78bc1688caa","Type":"ContainerDied","Data":"2eb82245e5fb570266eb83e2d5cde464f36352dbcb01a05dcdf2260437582314"} Jan 03 05:56:06 crc kubenswrapper[4854]: I0103 05:56:06.447811 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 05:56:09 crc kubenswrapper[4854]: E0103 05:56:09.360971 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff9403d_691a_422b_baec_f78bc1688caa.slice/crio-conmon-dcf3eb5190245b4491b3bb942af821352c9091795816228b5f17b203706096a7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff9403d_691a_422b_baec_f78bc1688caa.slice/crio-dcf3eb5190245b4491b3bb942af821352c9091795816228b5f17b203706096a7.scope\": RecentStats: unable to find data in memory cache]" Jan 03 05:56:09 crc kubenswrapper[4854]: I0103 05:56:09.467864 4854 generic.go:334] "Generic (PLEG): container finished" podID="aff9403d-691a-422b-baec-f78bc1688caa" containerID="dcf3eb5190245b4491b3bb942af821352c9091795816228b5f17b203706096a7" exitCode=0 Jan 03 05:56:09 crc kubenswrapper[4854]: I0103 05:56:09.467907 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" event={"ID":"aff9403d-691a-422b-baec-f78bc1688caa","Type":"ContainerDied","Data":"dcf3eb5190245b4491b3bb942af821352c9091795816228b5f17b203706096a7"} Jan 03 05:56:10 crc kubenswrapper[4854]: I0103 05:56:10.476355 4854 generic.go:334] "Generic (PLEG): container finished" podID="aff9403d-691a-422b-baec-f78bc1688caa" containerID="656a9b1b22fb9f95a7e8d9c077b509616663ef0596c3001b74edc88e734d7475" exitCode=0 Jan 03 05:56:10 crc kubenswrapper[4854]: I0103 05:56:10.476406 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" event={"ID":"aff9403d-691a-422b-baec-f78bc1688caa","Type":"ContainerDied","Data":"656a9b1b22fb9f95a7e8d9c077b509616663ef0596c3001b74edc88e734d7475"} Jan 03 05:56:11 crc 
kubenswrapper[4854]: I0103 05:56:11.930179 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.020466 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-bundle\") pod \"aff9403d-691a-422b-baec-f78bc1688caa\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.020645 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m77tn\" (UniqueName: \"kubernetes.io/projected/aff9403d-691a-422b-baec-f78bc1688caa-kube-api-access-m77tn\") pod \"aff9403d-691a-422b-baec-f78bc1688caa\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.020707 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-util\") pod \"aff9403d-691a-422b-baec-f78bc1688caa\" (UID: \"aff9403d-691a-422b-baec-f78bc1688caa\") " Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.021346 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-bundle" (OuterVolumeSpecName: "bundle") pod "aff9403d-691a-422b-baec-f78bc1688caa" (UID: "aff9403d-691a-422b-baec-f78bc1688caa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.021538 4854 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.031185 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-util" (OuterVolumeSpecName: "util") pod "aff9403d-691a-422b-baec-f78bc1688caa" (UID: "aff9403d-691a-422b-baec-f78bc1688caa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.031694 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff9403d-691a-422b-baec-f78bc1688caa-kube-api-access-m77tn" (OuterVolumeSpecName: "kube-api-access-m77tn") pod "aff9403d-691a-422b-baec-f78bc1688caa" (UID: "aff9403d-691a-422b-baec-f78bc1688caa"). InnerVolumeSpecName "kube-api-access-m77tn". 
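PluginName "kubernetes.io/projected", VolumeGidValue ""

The teardown above closes out what looks like an OLM bundle-unpack pod: the pull and extract containers each ran to completion (the "container finished ... exitCode=0" PLEG events), after which the kubelet unmounted the pod's emptyDir and projected volumes. The same terminal exit codes are visible from the API side; a small client-go sketch, assuming a reachable kubeconfig at the conventional home path, with the namespace and pod name taken from the log and only minimal error handling:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Prints each container's terminal exit code for one pod, the same signal
// the "Generic (PLEG): container finished" entries carry in the kubelet log.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("openshift-marketplace").Get(
		context.TODO(), "98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns",
		metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Include init containers: bundle-unpack pods do their work there too.
	for _, cs := range append(pod.Status.InitContainerStatuses, pod.Status.ContainerStatuses...) {
		if t := cs.State.Terminated; t != nil {
			fmt.Printf("%s exited %d\n", cs.Name, t.ExitCode)
		}
	}
}
```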
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.123204 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m77tn\" (UniqueName: \"kubernetes.io/projected/aff9403d-691a-422b-baec-f78bc1688caa-kube-api-access-m77tn\") on node \"crc\" DevicePath \"\"" Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.123253 4854 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aff9403d-691a-422b-baec-f78bc1688caa-util\") on node \"crc\" DevicePath \"\"" Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.499607 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" event={"ID":"aff9403d-691a-422b-baec-f78bc1688caa","Type":"ContainerDied","Data":"735573d4ea410c15ab79e7d899518cc7258ee00203a2c5084e040a12f76106d2"} Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.499663 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735573d4ea410c15ab79e7d899518cc7258ee00203a2c5084e040a12f76106d2" Jan 03 05:56:12 crc kubenswrapper[4854]: I0103 05:56:12.499689 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82gsns" Jan 03 05:56:13 crc kubenswrapper[4854]: E0103 05:56:13.773440 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf0394a_6374_4db4_8b4a_0014e7c20a2e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.829886 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-v6s7z"] Jan 03 05:56:15 crc kubenswrapper[4854]: E0103 05:56:15.830205 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff9403d-691a-422b-baec-f78bc1688caa" containerName="extract" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.830217 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff9403d-691a-422b-baec-f78bc1688caa" containerName="extract" Jan 03 05:56:15 crc kubenswrapper[4854]: E0103 05:56:15.830231 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff9403d-691a-422b-baec-f78bc1688caa" containerName="util" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.830237 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff9403d-691a-422b-baec-f78bc1688caa" containerName="util" Jan 03 05:56:15 crc kubenswrapper[4854]: E0103 05:56:15.830253 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff9403d-691a-422b-baec-f78bc1688caa" containerName="pull" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.830259 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff9403d-691a-422b-baec-f78bc1688caa" containerName="pull" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.830384 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff9403d-691a-422b-baec-f78bc1688caa" containerName="extract" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.830864 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-v6s7z" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.834510 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.834677 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.834987 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b4f8k" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.845894 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-v6s7z"] Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.892134 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqmj\" (UniqueName: \"kubernetes.io/projected/50eec07d-7f98-4964-8522-8f0c7ceb5a8d-kube-api-access-8tqmj\") pod \"nmstate-operator-6769fb99d-v6s7z\" (UID: \"50eec07d-7f98-4964-8522-8f0c7ceb5a8d\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-v6s7z" Jan 03 05:56:15 crc kubenswrapper[4854]: I0103 05:56:15.994138 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqmj\" (UniqueName: \"kubernetes.io/projected/50eec07d-7f98-4964-8522-8f0c7ceb5a8d-kube-api-access-8tqmj\") pod \"nmstate-operator-6769fb99d-v6s7z\" (UID: \"50eec07d-7f98-4964-8522-8f0c7ceb5a8d\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-v6s7z" Jan 03 05:56:16 crc kubenswrapper[4854]: I0103 05:56:16.013308 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqmj\" (UniqueName: \"kubernetes.io/projected/50eec07d-7f98-4964-8522-8f0c7ceb5a8d-kube-api-access-8tqmj\") pod \"nmstate-operator-6769fb99d-v6s7z\" (UID: \"50eec07d-7f98-4964-8522-8f0c7ceb5a8d\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-v6s7z" Jan 03 05:56:16 crc kubenswrapper[4854]: I0103 05:56:16.148625 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-v6s7z" Jan 03 05:56:16 crc kubenswrapper[4854]: I0103 05:56:16.673141 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-v6s7z"] Jan 03 05:56:17 crc kubenswrapper[4854]: I0103 05:56:17.533665 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-v6s7z" event={"ID":"50eec07d-7f98-4964-8522-8f0c7ceb5a8d","Type":"ContainerStarted","Data":"b8969d21a94025cd9f07a7e838b7ffa015982862b26dee27a85d09080b7658e4"} Jan 03 05:56:21 crc kubenswrapper[4854]: I0103 05:56:21.568126 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-v6s7z" event={"ID":"50eec07d-7f98-4964-8522-8f0c7ceb5a8d","Type":"ContainerStarted","Data":"691dc85395c71fb7d6c0a44852e8ad338e1d4becaa5b0c780bb925482985e8aa"} Jan 03 05:56:21 crc kubenswrapper[4854]: I0103 05:56:21.597279 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-v6s7z" podStartSLOduration=2.555915043 podStartE2EDuration="6.597204999s" podCreationTimestamp="2026-01-03 05:56:15 +0000 UTC" firstStartedPulling="2026-01-03 05:56:16.666090768 +0000 UTC m=+954.992667340" lastFinishedPulling="2026-01-03 05:56:20.707380704 +0000 UTC m=+959.033957296" observedRunningTime="2026-01-03 05:56:21.597224549 +0000 UTC m=+959.923801221" watchObservedRunningTime="2026-01-03 05:56:21.597204999 +0000 UTC m=+959.923781561" Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.870700 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn"] Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.872104 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.880092 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-mxm65"] Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.880564 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vc5vb" Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.888668 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.892685 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.922627 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn"] Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.931513 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-mxm65"] Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.933034 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxgrs\" (UniqueName: \"kubernetes.io/projected/d78e7aa0-58e7-4445-920b-ca73758f9c84-kube-api-access-qxgrs\") pod \"nmstate-webhook-f8fb84555-mxm65\" (UID: \"d78e7aa0-58e7-4445-920b-ca73758f9c84\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.933067 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxw4c\" (UniqueName: \"kubernetes.io/projected/098925cd-8842-4ed4-9757-568ef31ab2cf-kube-api-access-rxw4c\") pod \"nmstate-metrics-7f7f7578db-b66tn\" (UID: \"098925cd-8842-4ed4-9757-568ef31ab2cf\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.933119 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d78e7aa0-58e7-4445-920b-ca73758f9c84-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-mxm65\" (UID: \"d78e7aa0-58e7-4445-920b-ca73758f9c84\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.966109 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wwws2"] Jan 03 05:56:23 crc kubenswrapper[4854]: I0103 05:56:23.967493 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.045637 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxgrs\" (UniqueName: \"kubernetes.io/projected/d78e7aa0-58e7-4445-920b-ca73758f9c84-kube-api-access-qxgrs\") pod \"nmstate-webhook-f8fb84555-mxm65\" (UID: \"d78e7aa0-58e7-4445-920b-ca73758f9c84\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.045984 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxw4c\" (UniqueName: \"kubernetes.io/projected/098925cd-8842-4ed4-9757-568ef31ab2cf-kube-api-access-rxw4c\") pod \"nmstate-metrics-7f7f7578db-b66tn\" (UID: \"098925cd-8842-4ed4-9757-568ef31ab2cf\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.046037 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d78e7aa0-58e7-4445-920b-ca73758f9c84-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-mxm65\" (UID: \"d78e7aa0-58e7-4445-920b-ca73758f9c84\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:24 crc kubenswrapper[4854]: E0103 05:56:24.046256 4854 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 03 05:56:24 crc kubenswrapper[4854]: E0103 05:56:24.046307 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d78e7aa0-58e7-4445-920b-ca73758f9c84-tls-key-pair podName:d78e7aa0-58e7-4445-920b-ca73758f9c84 nodeName:}" failed. No retries permitted until 2026-01-03 05:56:24.546287837 +0000 UTC m=+962.872864409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/d78e7aa0-58e7-4445-920b-ca73758f9c84-tls-key-pair") pod "nmstate-webhook-f8fb84555-mxm65" (UID: "d78e7aa0-58e7-4445-920b-ca73758f9c84") : secret "openshift-nmstate-webhook" not found Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.070165 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv"] Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.070539 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxw4c\" (UniqueName: \"kubernetes.io/projected/098925cd-8842-4ed4-9757-568ef31ab2cf-kube-api-access-rxw4c\") pod \"nmstate-metrics-7f7f7578db-b66tn\" (UID: \"098925cd-8842-4ed4-9757-568ef31ab2cf\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.071230 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.077359 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.077518 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-7xsdj" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.077364 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.080589 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxgrs\" (UniqueName: \"kubernetes.io/projected/d78e7aa0-58e7-4445-920b-ca73758f9c84-kube-api-access-qxgrs\") pod \"nmstate-webhook-f8fb84555-mxm65\" (UID: \"d78e7aa0-58e7-4445-920b-ca73758f9c84\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.084426 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv"] Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.147224 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcqml\" (UniqueName: \"kubernetes.io/projected/6cc37176-dd9d-4138-a8f4-615d7815311a-kube-api-access-fcqml\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.147298 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgll\" (UniqueName: \"kubernetes.io/projected/a79c8399-b9f8-4caf-b03b-7474f9c4441a-kube-api-access-9zgll\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.147326 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-nmstate-lock\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.147350 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a79c8399-b9f8-4caf-b03b-7474f9c4441a-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.147388 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a79c8399-b9f8-4caf-b03b-7474f9c4441a-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.147416 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-ovs-socket\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.147438 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-dbus-socket\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.228873 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.249377 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcqml\" (UniqueName: \"kubernetes.io/projected/6cc37176-dd9d-4138-a8f4-615d7815311a-kube-api-access-fcqml\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.249480 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgll\" (UniqueName: \"kubernetes.io/projected/a79c8399-b9f8-4caf-b03b-7474f9c4441a-kube-api-access-9zgll\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.249520 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-nmstate-lock\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.249553 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a79c8399-b9f8-4caf-b03b-7474f9c4441a-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.249584 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a79c8399-b9f8-4caf-b03b-7474f9c4441a-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.249625 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-ovs-socket\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.249669 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-dbus-socket\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " 
pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.250159 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-dbus-socket\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.250706 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-nmstate-lock\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.251727 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a79c8399-b9f8-4caf-b03b-7474f9c4441a-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.252404 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6cc37176-dd9d-4138-a8f4-615d7815311a-ovs-socket\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.258756 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a79c8399-b9f8-4caf-b03b-7474f9c4441a-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.280978 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d6659995-xwhf5"] Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.282336 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgll\" (UniqueName: \"kubernetes.io/projected/a79c8399-b9f8-4caf-b03b-7474f9c4441a-kube-api-access-9zgll\") pod \"nmstate-console-plugin-6ff7998486-tsrnv\" (UID: \"a79c8399-b9f8-4caf-b03b-7474f9c4441a\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.283464 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcqml\" (UniqueName: \"kubernetes.io/projected/6cc37176-dd9d-4138-a8f4-615d7815311a-kube-api-access-fcqml\") pod \"nmstate-handler-wwws2\" (UID: \"6cc37176-dd9d-4138-a8f4-615d7815311a\") " pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.288106 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.310127 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d6659995-xwhf5"] Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.310277 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.354472 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-trusted-ca-bundle\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.358821 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-serving-cert\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.358915 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-oauth-serving-cert\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.359040 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-service-ca\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.359065 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5dmk\" (UniqueName: \"kubernetes.io/projected/1ca99325-405c-467a-a9e0-53c5e4fb96e4-kube-api-access-s5dmk\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.359134 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-oauth-config\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.359161 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-config\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.438369 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.460813 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-trusted-ca-bundle\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.461744 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-serving-cert\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.461904 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-oauth-serving-cert\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.462249 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-service-ca\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.462472 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-trusted-ca-bundle\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.463127 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-oauth-serving-cert\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.463911 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-service-ca\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.464036 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5dmk\" (UniqueName: \"kubernetes.io/projected/1ca99325-405c-467a-a9e0-53c5e4fb96e4-kube-api-access-s5dmk\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.464180 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-oauth-config\") pod \"console-64d6659995-xwhf5\" (UID: 
\"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.464266 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-config\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.465794 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-serving-cert\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.465977 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-config\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.470863 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-oauth-config\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.482978 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5dmk\" (UniqueName: \"kubernetes.io/projected/1ca99325-405c-467a-a9e0-53c5e4fb96e4-kube-api-access-s5dmk\") pod \"console-64d6659995-xwhf5\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.566787 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d78e7aa0-58e7-4445-920b-ca73758f9c84-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-mxm65\" (UID: \"d78e7aa0-58e7-4445-920b-ca73758f9c84\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.569962 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d78e7aa0-58e7-4445-920b-ca73758f9c84-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-mxm65\" (UID: \"d78e7aa0-58e7-4445-920b-ca73758f9c84\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.586890 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wwws2" event={"ID":"6cc37176-dd9d-4138-a8f4-615d7815311a","Type":"ContainerStarted","Data":"070d8494296f5dd1aa6ac5eca697f87218d0fc1245144de728d639608b4f38c7"} Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.637646 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.751967 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn"] Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.861809 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:24 crc kubenswrapper[4854]: I0103 05:56:24.917658 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv"] Jan 03 05:56:25 crc kubenswrapper[4854]: I0103 05:56:25.146436 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d6659995-xwhf5"] Jan 03 05:56:25 crc kubenswrapper[4854]: W0103 05:56:25.155939 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ca99325_405c_467a_a9e0_53c5e4fb96e4.slice/crio-064a900973c6546b03e8b420b28b3f7e0bbd28f9f57dfc8ded02ffac2098d734 WatchSource:0}: Error finding container 064a900973c6546b03e8b420b28b3f7e0bbd28f9f57dfc8ded02ffac2098d734: Status 404 returned error can't find the container with id 064a900973c6546b03e8b420b28b3f7e0bbd28f9f57dfc8ded02ffac2098d734 Jan 03 05:56:25 crc kubenswrapper[4854]: I0103 05:56:25.273228 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-mxm65"] Jan 03 05:56:25 crc kubenswrapper[4854]: W0103 05:56:25.282126 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd78e7aa0_58e7_4445_920b_ca73758f9c84.slice/crio-59bba45c98e3ccd456fec92567345d3574750429bfd854370380e153a095cf7c WatchSource:0}: Error finding container 59bba45c98e3ccd456fec92567345d3574750429bfd854370380e153a095cf7c: Status 404 returned error can't find the container with id 59bba45c98e3ccd456fec92567345d3574750429bfd854370380e153a095cf7c Jan 03 05:56:25 crc kubenswrapper[4854]: I0103 05:56:25.601034 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" event={"ID":"d78e7aa0-58e7-4445-920b-ca73758f9c84","Type":"ContainerStarted","Data":"59bba45c98e3ccd456fec92567345d3574750429bfd854370380e153a095cf7c"} Jan 03 05:56:25 crc kubenswrapper[4854]: I0103 05:56:25.602532 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" event={"ID":"098925cd-8842-4ed4-9757-568ef31ab2cf","Type":"ContainerStarted","Data":"ff19aa7b8cca096512b3a112f64836b2e310e88e8f61fca30172191e0ddc1d00"} Jan 03 05:56:25 crc kubenswrapper[4854]: I0103 05:56:25.604141 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d6659995-xwhf5" event={"ID":"1ca99325-405c-467a-a9e0-53c5e4fb96e4","Type":"ContainerStarted","Data":"5513109ef6942d9686904d3f18a4a8b92ad267ac54c67de568e73bfe7ff3688e"} Jan 03 05:56:25 crc kubenswrapper[4854]: I0103 05:56:25.604183 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d6659995-xwhf5" event={"ID":"1ca99325-405c-467a-a9e0-53c5e4fb96e4","Type":"ContainerStarted","Data":"064a900973c6546b03e8b420b28b3f7e0bbd28f9f57dfc8ded02ffac2098d734"} Jan 03 05:56:25 crc kubenswrapper[4854]: I0103 05:56:25.605617 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" 
event={"ID":"a79c8399-b9f8-4caf-b03b-7474f9c4441a","Type":"ContainerStarted","Data":"deec8bda88d896e0ecfb41aab6a8b405c1f12a9b36d8775858ed11aea3b55ac6"} Jan 03 05:56:25 crc kubenswrapper[4854]: I0103 05:56:25.639405 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d6659995-xwhf5" podStartSLOduration=1.639381358 podStartE2EDuration="1.639381358s" podCreationTimestamp="2026-01-03 05:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:56:25.625533732 +0000 UTC m=+963.952110304" watchObservedRunningTime="2026-01-03 05:56:25.639381358 +0000 UTC m=+963.965957930" Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.641384 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" event={"ID":"a79c8399-b9f8-4caf-b03b-7474f9c4441a","Type":"ContainerStarted","Data":"8ba2bdb94793454cf47532f8265085d4c81aeb31136c97e44db9a29d3a95223e"} Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.643813 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wwws2" event={"ID":"6cc37176-dd9d-4138-a8f4-615d7815311a","Type":"ContainerStarted","Data":"b4559f72ba39aabbec97ef4052bfc0c46f1a012359af15125e6a9e5b18422cbd"} Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.644661 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.645902 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" event={"ID":"d78e7aa0-58e7-4445-920b-ca73758f9c84","Type":"ContainerStarted","Data":"5fbdec58ab8b0d698aaaf59ce889e537baf7d8d47eb6aa3fe22da066c0d9c36a"} Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.646403 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.650453 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" event={"ID":"098925cd-8842-4ed4-9757-568ef31ab2cf","Type":"ContainerStarted","Data":"0e02257486be79751ebf5de20e4501afe7fcb9703399c7537d23659e22af3393"} Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.671836 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tsrnv" podStartSLOduration=2.050342451 podStartE2EDuration="5.671816264s" podCreationTimestamp="2026-01-03 05:56:24 +0000 UTC" firstStartedPulling="2026-01-03 05:56:24.944016069 +0000 UTC m=+963.270592641" lastFinishedPulling="2026-01-03 05:56:28.565489882 +0000 UTC m=+966.892066454" observedRunningTime="2026-01-03 05:56:29.665760413 +0000 UTC m=+967.992337005" watchObservedRunningTime="2026-01-03 05:56:29.671816264 +0000 UTC m=+967.998392856" Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.692928 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" podStartSLOduration=3.406464912 podStartE2EDuration="6.692906382s" podCreationTimestamp="2026-01-03 05:56:23 +0000 UTC" firstStartedPulling="2026-01-03 05:56:25.292021157 +0000 UTC m=+963.618597729" lastFinishedPulling="2026-01-03 05:56:28.578462627 +0000 UTC m=+966.905039199" observedRunningTime="2026-01-03 05:56:29.691877396 
+0000 UTC m=+968.018453968" watchObservedRunningTime="2026-01-03 05:56:29.692906382 +0000 UTC m=+968.019482964" Jan 03 05:56:29 crc kubenswrapper[4854]: I0103 05:56:29.716285 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wwws2" podStartSLOduration=2.460425651 podStartE2EDuration="6.716267977s" podCreationTimestamp="2026-01-03 05:56:23 +0000 UTC" firstStartedPulling="2026-01-03 05:56:24.322769255 +0000 UTC m=+962.649345827" lastFinishedPulling="2026-01-03 05:56:28.578611581 +0000 UTC m=+966.905188153" observedRunningTime="2026-01-03 05:56:29.715846306 +0000 UTC m=+968.042422878" watchObservedRunningTime="2026-01-03 05:56:29.716267977 +0000 UTC m=+968.042844549" Jan 03 05:56:31 crc kubenswrapper[4854]: I0103 05:56:31.675770 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" event={"ID":"098925cd-8842-4ed4-9757-568ef31ab2cf","Type":"ContainerStarted","Data":"e5c64bc1153739712ad8ca210558f480e22b79b194d32d837ca7be81d5c7eb46"} Jan 03 05:56:34 crc kubenswrapper[4854]: I0103 05:56:34.331656 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 05:56:34 crc kubenswrapper[4854]: I0103 05:56:34.355807 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-b66tn" podStartSLOduration=4.86553088 podStartE2EDuration="11.355776152s" podCreationTimestamp="2026-01-03 05:56:23 +0000 UTC" firstStartedPulling="2026-01-03 05:56:24.750407755 +0000 UTC m=+963.076984327" lastFinishedPulling="2026-01-03 05:56:31.240653007 +0000 UTC m=+969.567229599" observedRunningTime="2026-01-03 05:56:31.709608121 +0000 UTC m=+970.036184703" watchObservedRunningTime="2026-01-03 05:56:34.355776152 +0000 UTC m=+972.682352744" Jan 03 05:56:34 crc kubenswrapper[4854]: I0103 05:56:34.638854 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:34 crc kubenswrapper[4854]: I0103 05:56:34.638907 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:34 crc kubenswrapper[4854]: I0103 05:56:34.643815 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:34 crc kubenswrapper[4854]: I0103 05:56:34.703043 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d6659995-xwhf5" Jan 03 05:56:34 crc kubenswrapper[4854]: I0103 05:56:34.773786 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76775dbc85-4tdnl"] Jan 03 05:56:44 crc kubenswrapper[4854]: I0103 05:56:44.873260 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 05:56:59 crc kubenswrapper[4854]: I0103 05:56:59.835774 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76775dbc85-4tdnl" podUID="e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" containerName="console" containerID="cri-o://6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3" gracePeriod=15 Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.263345 4854 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-76775dbc85-4tdnl_e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210/console/0.log" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.263678 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.443648 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-service-ca\") pod \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.443734 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-oauth-serving-cert\") pod \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.443831 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-trusted-ca-bundle\") pod \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.443878 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vjh7\" (UniqueName: \"kubernetes.io/projected/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-kube-api-access-7vjh7\") pod \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.443912 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-oauth-config\") pod \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.444017 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-serving-cert\") pod \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.444050 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-config\") pod \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\" (UID: \"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210\") " Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.444922 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" (UID: "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.444956 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-config" (OuterVolumeSpecName: "console-config") pod "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" (UID: "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.445027 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" (UID: "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.445059 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-service-ca" (OuterVolumeSpecName: "service-ca") pod "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" (UID: "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.454209 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-kube-api-access-7vjh7" (OuterVolumeSpecName: "kube-api-access-7vjh7") pod "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" (UID: "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210"). InnerVolumeSpecName "kube-api-access-7vjh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.456688 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" (UID: "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.456859 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" (UID: "e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.546339 4854 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.546386 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.546441 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vjh7\" (UniqueName: \"kubernetes.io/projected/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-kube-api-access-7vjh7\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.546458 4854 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.546471 4854 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.546483 4854 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-console-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.546494 4854 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.925267 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76775dbc85-4tdnl_e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210/console/0.log" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.925620 4854 generic.go:334] "Generic (PLEG): container finished" podID="e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" containerID="6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3" exitCode=2 Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.925653 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76775dbc85-4tdnl" event={"ID":"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210","Type":"ContainerDied","Data":"6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3"} Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.925678 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76775dbc85-4tdnl" event={"ID":"e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210","Type":"ContainerDied","Data":"54db28405a9c9114c00755ba3742c00bbe63caa734348b6b9f7b47b1a8ab7d24"} Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.925695 4854 scope.go:117] "RemoveContainer" containerID="6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.925720 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76775dbc85-4tdnl" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.944526 4854 scope.go:117] "RemoveContainer" containerID="6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3" Jan 03 05:57:00 crc kubenswrapper[4854]: E0103 05:57:00.945026 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3\": container with ID starting with 6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3 not found: ID does not exist" containerID="6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.945103 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3"} err="failed to get container status \"6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3\": rpc error: code = NotFound desc = could not find container \"6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3\": container with ID starting with 6baa761307040e0b368469ef570944b30d01fbf529c92f060f1840666ccaa6f3 not found: ID does not exist" Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.963766 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76775dbc85-4tdnl"] Jan 03 05:57:00 crc kubenswrapper[4854]: I0103 05:57:00.987707 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76775dbc85-4tdnl"] Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.077461 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7"] Jan 03 05:57:02 crc kubenswrapper[4854]: E0103 05:57:02.078030 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" containerName="console" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.078053 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" containerName="console" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.078325 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" containerName="console" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.079996 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.086901 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.089872 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7"] Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.126790 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210" path="/var/lib/kubelet/pods/e77a9738-c4d7-4ca3-bbf7-c1c6e8ce4210/volumes" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.172458 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.172606 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k498w\" (UniqueName: \"kubernetes.io/projected/511def33-7855-43fc-85a1-065f7e0d7c07-kube-api-access-k498w\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.172659 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.273448 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k498w\" (UniqueName: \"kubernetes.io/projected/511def33-7855-43fc-85a1-065f7e0d7c07-kube-api-access-k498w\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.273515 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.273771 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.274147 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.274151 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.291876 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k498w\" (UniqueName: \"kubernetes.io/projected/511def33-7855-43fc-85a1-065f7e0d7c07-kube-api-access-k498w\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:02 crc kubenswrapper[4854]: I0103 05:57:02.403816 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:03 crc kubenswrapper[4854]: I0103 05:57:03.037844 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7"] Jan 03 05:57:03 crc kubenswrapper[4854]: I0103 05:57:03.956794 4854 generic.go:334] "Generic (PLEG): container finished" podID="511def33-7855-43fc-85a1-065f7e0d7c07" containerID="f65767a4c55b0ef9b28735d185783dfbc7e9362379594a693ffdc52a752cfda6" exitCode=0 Jan 03 05:57:03 crc kubenswrapper[4854]: I0103 05:57:03.956886 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" event={"ID":"511def33-7855-43fc-85a1-065f7e0d7c07","Type":"ContainerDied","Data":"f65767a4c55b0ef9b28735d185783dfbc7e9362379594a693ffdc52a752cfda6"} Jan 03 05:57:03 crc kubenswrapper[4854]: I0103 05:57:03.957172 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" event={"ID":"511def33-7855-43fc-85a1-065f7e0d7c07","Type":"ContainerStarted","Data":"05603ad75ec6d90a06da838445ce261f75b7d735c7ec4f179c5605911b6c7298"} Jan 03 05:57:05 crc kubenswrapper[4854]: I0103 05:57:05.977537 4854 generic.go:334] "Generic (PLEG): container finished" podID="511def33-7855-43fc-85a1-065f7e0d7c07" containerID="f3db1cc05f67436a11016076345169d19d092b6cb9e0840028a4da9ff3996019" exitCode=0 Jan 03 05:57:05 crc kubenswrapper[4854]: I0103 05:57:05.977611 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" event={"ID":"511def33-7855-43fc-85a1-065f7e0d7c07","Type":"ContainerDied","Data":"f3db1cc05f67436a11016076345169d19d092b6cb9e0840028a4da9ff3996019"} Jan 03 05:57:06 crc kubenswrapper[4854]: I0103 
05:57:06.987055 4854 generic.go:334] "Generic (PLEG): container finished" podID="511def33-7855-43fc-85a1-065f7e0d7c07" containerID="ec4f39c9a31b9d204d24bd6cd7db78c17a05f7916956259af84df5230b020fc8" exitCode=0 Jan 03 05:57:06 crc kubenswrapper[4854]: I0103 05:57:06.987109 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" event={"ID":"511def33-7855-43fc-85a1-065f7e0d7c07","Type":"ContainerDied","Data":"ec4f39c9a31b9d204d24bd6cd7db78c17a05f7916956259af84df5230b020fc8"} Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.379558 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.475853 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-bundle\") pod \"511def33-7855-43fc-85a1-065f7e0d7c07\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.475989 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-util\") pod \"511def33-7855-43fc-85a1-065f7e0d7c07\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.476144 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k498w\" (UniqueName: \"kubernetes.io/projected/511def33-7855-43fc-85a1-065f7e0d7c07-kube-api-access-k498w\") pod \"511def33-7855-43fc-85a1-065f7e0d7c07\" (UID: \"511def33-7855-43fc-85a1-065f7e0d7c07\") " Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.476867 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-bundle" (OuterVolumeSpecName: "bundle") pod "511def33-7855-43fc-85a1-065f7e0d7c07" (UID: "511def33-7855-43fc-85a1-065f7e0d7c07"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.480966 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511def33-7855-43fc-85a1-065f7e0d7c07-kube-api-access-k498w" (OuterVolumeSpecName: "kube-api-access-k498w") pod "511def33-7855-43fc-85a1-065f7e0d7c07" (UID: "511def33-7855-43fc-85a1-065f7e0d7c07"). InnerVolumeSpecName "kube-api-access-k498w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.578373 4854 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.578405 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k498w\" (UniqueName: \"kubernetes.io/projected/511def33-7855-43fc-85a1-065f7e0d7c07-kube-api-access-k498w\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.794442 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-util" (OuterVolumeSpecName: "util") pod "511def33-7855-43fc-85a1-065f7e0d7c07" (UID: "511def33-7855-43fc-85a1-065f7e0d7c07"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:57:08 crc kubenswrapper[4854]: I0103 05:57:08.886034 4854 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/511def33-7855-43fc-85a1-065f7e0d7c07-util\") on node \"crc\" DevicePath \"\"" Jan 03 05:57:09 crc kubenswrapper[4854]: I0103 05:57:09.009951 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" event={"ID":"511def33-7855-43fc-85a1-065f7e0d7c07","Type":"ContainerDied","Data":"05603ad75ec6d90a06da838445ce261f75b7d735c7ec4f179c5605911b6c7298"} Jan 03 05:57:09 crc kubenswrapper[4854]: I0103 05:57:09.010017 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05603ad75ec6d90a06da838445ce261f75b7d735c7ec4f179c5605911b6c7298" Jan 03 05:57:09 crc kubenswrapper[4854]: I0103 05:57:09.010019 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qb8s7" Jan 03 05:57:11 crc kubenswrapper[4854]: I0103 05:57:11.755896 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:57:11 crc kubenswrapper[4854]: I0103 05:57:11.756418 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.061605 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws"] Jan 03 05:57:19 crc kubenswrapper[4854]: E0103 05:57:19.063270 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511def33-7855-43fc-85a1-065f7e0d7c07" containerName="util" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.063394 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="511def33-7855-43fc-85a1-065f7e0d7c07" containerName="util" Jan 03 05:57:19 crc kubenswrapper[4854]: E0103 05:57:19.063473 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511def33-7855-43fc-85a1-065f7e0d7c07" containerName="pull" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.063526 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="511def33-7855-43fc-85a1-065f7e0d7c07" containerName="pull" Jan 03 05:57:19 crc kubenswrapper[4854]: E0103 05:57:19.063589 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511def33-7855-43fc-85a1-065f7e0d7c07" containerName="extract" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.063655 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="511def33-7855-43fc-85a1-065f7e0d7c07" containerName="extract" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.063905 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="511def33-7855-43fc-85a1-065f7e0d7c07" containerName="extract" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.064568 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.066962 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ppsvt" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.066968 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.066959 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.067144 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.067371 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.086663 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws"] Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.150929 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmfd4\" (UniqueName: \"kubernetes.io/projected/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-kube-api-access-rmfd4\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.151009 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-webhook-cert\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.151118 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-apiservice-cert\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.252980 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmfd4\" (UniqueName: \"kubernetes.io/projected/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-kube-api-access-rmfd4\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.253057 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-webhook-cert\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.253125 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-apiservice-cert\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.258795 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-webhook-cert\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.258864 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-apiservice-cert\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.274952 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmfd4\" (UniqueName: \"kubernetes.io/projected/c752fc50-5b45-4cbc-8a1c-b0cec9e720e5-kube-api-access-rmfd4\") pod \"metallb-operator-controller-manager-7fdb976ccd-xpqws\" (UID: \"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5\") " pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.382804 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.430325 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf"] Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.433285 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.436666 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.437488 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wnkjs" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.437683 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.446484 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf"] Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.562108 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-webhook-cert\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.562188 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph767\" (UniqueName: \"kubernetes.io/projected/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-kube-api-access-ph767\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.562472 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-apiservice-cert\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.664197 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-apiservice-cert\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.664266 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-webhook-cert\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.664319 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph767\" (UniqueName: \"kubernetes.io/projected/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-kube-api-access-ph767\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 
05:57:19.671348 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-webhook-cert\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.682866 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph767\" (UniqueName: \"kubernetes.io/projected/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-kube-api-access-ph767\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.684391 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c82f4933-ef34-46ae-8f48-f87b3ce1e90f-apiservice-cert\") pod \"metallb-operator-webhook-server-85546d974f-nhdvf\" (UID: \"c82f4933-ef34-46ae-8f48-f87b3ce1e90f\") " pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.767682 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:19 crc kubenswrapper[4854]: I0103 05:57:19.909944 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws"] Jan 03 05:57:20 crc kubenswrapper[4854]: I0103 05:57:20.094335 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" event={"ID":"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5","Type":"ContainerStarted","Data":"7a9ab8e6054766d2b9bc5e3341ec79d9231bbb20f08db54f97797603aec88c15"} Jan 03 05:57:20 crc kubenswrapper[4854]: I0103 05:57:20.317847 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf"] Jan 03 05:57:20 crc kubenswrapper[4854]: W0103 05:57:20.320419 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc82f4933_ef34_46ae_8f48_f87b3ce1e90f.slice/crio-b58f656b21340e0200c2f7a9291879a638df58462bba2d755d9b46011bfa1900 WatchSource:0}: Error finding container b58f656b21340e0200c2f7a9291879a638df58462bba2d755d9b46011bfa1900: Status 404 returned error can't find the container with id b58f656b21340e0200c2f7a9291879a638df58462bba2d755d9b46011bfa1900 Jan 03 05:57:21 crc kubenswrapper[4854]: I0103 05:57:21.105376 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" event={"ID":"c82f4933-ef34-46ae-8f48-f87b3ce1e90f","Type":"ContainerStarted","Data":"b58f656b21340e0200c2f7a9291879a638df58462bba2d755d9b46011bfa1900"} Jan 03 05:57:24 crc kubenswrapper[4854]: I0103 05:57:24.191605 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:57:24 crc kubenswrapper[4854]: I0103 05:57:24.201780 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" 
event={"ID":"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5","Type":"ContainerStarted","Data":"fe20e50857042ab7dcaddfbd3d7f074d09030b7ded9acadf155df462aaccbfcd"} Jan 03 05:57:24 crc kubenswrapper[4854]: I0103 05:57:24.208203 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" podStartSLOduration=1.765397689 podStartE2EDuration="5.208184141s" podCreationTimestamp="2026-01-03 05:57:19 +0000 UTC" firstStartedPulling="2026-01-03 05:57:19.921132634 +0000 UTC m=+1018.247709206" lastFinishedPulling="2026-01-03 05:57:23.363919086 +0000 UTC m=+1021.690495658" observedRunningTime="2026-01-03 05:57:24.178645312 +0000 UTC m=+1022.505221904" watchObservedRunningTime="2026-01-03 05:57:24.208184141 +0000 UTC m=+1022.534760713" Jan 03 05:57:27 crc kubenswrapper[4854]: I0103 05:57:27.287461 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" event={"ID":"c82f4933-ef34-46ae-8f48-f87b3ce1e90f","Type":"ContainerStarted","Data":"920c20a2aca36567c8d53a27e449dedf658aa6fc46392a08b3ed436f3b4ece63"} Jan 03 05:57:27 crc kubenswrapper[4854]: I0103 05:57:27.289752 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:27 crc kubenswrapper[4854]: I0103 05:57:27.325583 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podStartSLOduration=2.311900983 podStartE2EDuration="8.325559861s" podCreationTimestamp="2026-01-03 05:57:19 +0000 UTC" firstStartedPulling="2026-01-03 05:57:20.322964178 +0000 UTC m=+1018.649540750" lastFinishedPulling="2026-01-03 05:57:26.336623056 +0000 UTC m=+1024.663199628" observedRunningTime="2026-01-03 05:57:27.319921509 +0000 UTC m=+1025.646498091" watchObservedRunningTime="2026-01-03 05:57:27.325559861 +0000 UTC m=+1025.652136453" Jan 03 05:57:39 crc kubenswrapper[4854]: I0103 05:57:39.774983 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 05:57:41 crc kubenswrapper[4854]: I0103 05:57:41.756232 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:57:41 crc kubenswrapper[4854]: I0103 05:57:41.756734 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:57:59 crc kubenswrapper[4854]: I0103 05:57:59.385109 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.129433 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4"] Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.130909 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.135966 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ts5vn" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.136373 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.137798 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4"] Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.145097 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6fczv"] Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.152344 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.158007 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.158092 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.218263 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9mfrk"] Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.219596 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.223224 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.223292 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-5hxm7" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.223246 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.226831 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.242366 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-bzjqc"] Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.243738 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.245989 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.252226 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-bzjqc"] Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.257437 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-startup\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.257495 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6zwv\" (UniqueName: \"kubernetes.io/projected/e29c84ac-4ca9-44ec-b886-ae50c84ba121-kube-api-access-w6zwv\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.257521 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-conf\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.257540 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics-certs\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.257646 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.257938 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzvl\" (UniqueName: \"kubernetes.io/projected/ea9863f6-8706-4844-ad3e-93309cdbef22-kube-api-access-fqzvl\") pod \"frr-k8s-webhook-server-7784b6fcf-lzwb4\" (UID: \"ea9863f6-8706-4844-ad3e-93309cdbef22\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.258233 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-reloader\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.258458 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea9863f6-8706-4844-ad3e-93309cdbef22-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-lzwb4\" (UID: \"ea9863f6-8706-4844-ad3e-93309cdbef22\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:00 crc 
kubenswrapper[4854]: I0103 05:58:00.258522 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-sockets\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.360734 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.360801 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea9863f6-8706-4844-ad3e-93309cdbef22-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-lzwb4\" (UID: \"ea9863f6-8706-4844-ad3e-93309cdbef22\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.360840 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-sockets\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.360901 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metallb-excludel2\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.360928 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-startup\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.360975 4854 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.360983 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-metrics-certs\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.361102 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea9863f6-8706-4844-ad3e-93309cdbef22-cert podName:ea9863f6-8706-4844-ad3e-93309cdbef22 nodeName:}" failed. No retries permitted until 2026-01-03 05:58:00.861035708 +0000 UTC m=+1059.187612280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ea9863f6-8706-4844-ad3e-93309cdbef22-cert") pod "frr-k8s-webhook-server-7784b6fcf-lzwb4" (UID: "ea9863f6-8706-4844-ad3e-93309cdbef22") : secret "frr-k8s-webhook-server-cert" not found Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361164 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6zwv\" (UniqueName: \"kubernetes.io/projected/e29c84ac-4ca9-44ec-b886-ae50c84ba121-kube-api-access-w6zwv\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361225 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-conf\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361257 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics-certs\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361287 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361308 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzvl\" (UniqueName: \"kubernetes.io/projected/ea9863f6-8706-4844-ad3e-93309cdbef22-kube-api-access-fqzvl\") pod \"frr-k8s-webhook-server-7784b6fcf-lzwb4\" (UID: \"ea9863f6-8706-4844-ad3e-93309cdbef22\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361329 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-sockets\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361382 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxpx4\" (UniqueName: \"kubernetes.io/projected/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-kube-api-access-cxpx4\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361456 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-cert\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361473 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c9f9\" (UniqueName: 
\"kubernetes.io/projected/d1422b70-f6c6-46f8-81b3-1d2f35800374-kube-api-access-4c9f9\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361514 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-reloader\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361530 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-conf\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361591 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metrics-certs\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.361599 4854 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.361641 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics-certs podName:e29c84ac-4ca9-44ec-b886-ae50c84ba121 nodeName:}" failed. No retries permitted until 2026-01-03 05:58:00.861626683 +0000 UTC m=+1059.188203255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics-certs") pod "frr-k8s-6fczv" (UID: "e29c84ac-4ca9-44ec-b886-ae50c84ba121") : secret "frr-k8s-certs-secret" not found Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.361966 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.362185 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e29c84ac-4ca9-44ec-b886-ae50c84ba121-reloader\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.362371 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e29c84ac-4ca9-44ec-b886-ae50c84ba121-frr-startup\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.386194 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzvl\" (UniqueName: \"kubernetes.io/projected/ea9863f6-8706-4844-ad3e-93309cdbef22-kube-api-access-fqzvl\") pod \"frr-k8s-webhook-server-7784b6fcf-lzwb4\" (UID: \"ea9863f6-8706-4844-ad3e-93309cdbef22\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.390743 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6zwv\" (UniqueName: \"kubernetes.io/projected/e29c84ac-4ca9-44ec-b886-ae50c84ba121-kube-api-access-w6zwv\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.462677 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metallb-excludel2\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.462734 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-metrics-certs\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.462791 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxpx4\" (UniqueName: \"kubernetes.io/projected/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-kube-api-access-cxpx4\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.462820 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c9f9\" (UniqueName: \"kubernetes.io/projected/d1422b70-f6c6-46f8-81b3-1d2f35800374-kube-api-access-4c9f9\") pod \"controller-5bddd4b946-bzjqc\" (UID: 
\"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.462837 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-cert\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.462874 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metrics-certs\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.462907 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.462921 4854 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.462985 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-metrics-certs podName:d1422b70-f6c6-46f8-81b3-1d2f35800374 nodeName:}" failed. No retries permitted until 2026-01-03 05:58:00.962966156 +0000 UTC m=+1059.289542728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-metrics-certs") pod "controller-5bddd4b946-bzjqc" (UID: "d1422b70-f6c6-46f8-81b3-1d2f35800374") : secret "controller-certs-secret" not found Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.463020 4854 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.463066 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist podName:b826d6d3-0de8-4b3d-9294-9e5f8f9faae6 nodeName:}" failed. No retries permitted until 2026-01-03 05:58:00.963050018 +0000 UTC m=+1059.289626590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist") pod "speaker-9mfrk" (UID: "b826d6d3-0de8-4b3d-9294-9e5f8f9faae6") : secret "metallb-memberlist" not found Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.463148 4854 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.463187 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metrics-certs podName:b826d6d3-0de8-4b3d-9294-9e5f8f9faae6 nodeName:}" failed. No retries permitted until 2026-01-03 05:58:00.963177012 +0000 UTC m=+1059.289753714 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metrics-certs") pod "speaker-9mfrk" (UID: "b826d6d3-0de8-4b3d-9294-9e5f8f9faae6") : secret "speaker-certs-secret" not found Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.463383 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metallb-excludel2\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.464821 4854 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.476771 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-cert\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.481337 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxpx4\" (UniqueName: \"kubernetes.io/projected/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-kube-api-access-cxpx4\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.481847 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c9f9\" (UniqueName: \"kubernetes.io/projected/d1422b70-f6c6-46f8-81b3-1d2f35800374-kube-api-access-4c9f9\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.869492 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics-certs\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.869617 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea9863f6-8706-4844-ad3e-93309cdbef22-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-lzwb4\" (UID: \"ea9863f6-8706-4844-ad3e-93309cdbef22\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.874120 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea9863f6-8706-4844-ad3e-93309cdbef22-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-lzwb4\" (UID: \"ea9863f6-8706-4844-ad3e-93309cdbef22\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.879754 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e29c84ac-4ca9-44ec-b886-ae50c84ba121-metrics-certs\") pod \"frr-k8s-6fczv\" (UID: \"e29c84ac-4ca9-44ec-b886-ae50c84ba121\") " pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.971428 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metrics-certs\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.971507 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.971589 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-metrics-certs\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.971639 4854 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 03 05:58:00 crc kubenswrapper[4854]: E0103 05:58:00.971716 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist podName:b826d6d3-0de8-4b3d-9294-9e5f8f9faae6 nodeName:}" failed. No retries permitted until 2026-01-03 05:58:01.971693405 +0000 UTC m=+1060.298269977 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist") pod "speaker-9mfrk" (UID: "b826d6d3-0de8-4b3d-9294-9e5f8f9faae6") : secret "metallb-memberlist" not found Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.977752 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-metrics-certs\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:00 crc kubenswrapper[4854]: I0103 05:58:00.984826 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1422b70-f6c6-46f8-81b3-1d2f35800374-metrics-certs\") pod \"controller-5bddd4b946-bzjqc\" (UID: \"d1422b70-f6c6-46f8-81b3-1d2f35800374\") " pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:01 crc kubenswrapper[4854]: I0103 05:58:01.051002 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:01 crc kubenswrapper[4854]: I0103 05:58:01.070562 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:01 crc kubenswrapper[4854]: I0103 05:58:01.158561 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:01 crc kubenswrapper[4854]: I0103 05:58:01.597013 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4"] Jan 03 05:58:01 crc kubenswrapper[4854]: W0103 05:58:01.601299 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea9863f6_8706_4844_ad3e_93309cdbef22.slice/crio-1dcb810e46c144d7ff00eb314e08008fe751daeca08b596fe9458c7188efae56 WatchSource:0}: Error finding container 1dcb810e46c144d7ff00eb314e08008fe751daeca08b596fe9458c7188efae56: Status 404 returned error can't find the container with id 1dcb810e46c144d7ff00eb314e08008fe751daeca08b596fe9458c7188efae56 Jan 03 05:58:01 crc kubenswrapper[4854]: I0103 05:58:01.665874 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"5d19e80dca86a8c1e56628929e343243937aefd29323bb2d0e0022ce7c06f66d"} Jan 03 05:58:01 crc kubenswrapper[4854]: I0103 05:58:01.666999 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" event={"ID":"ea9863f6-8706-4844-ad3e-93309cdbef22","Type":"ContainerStarted","Data":"1dcb810e46c144d7ff00eb314e08008fe751daeca08b596fe9458c7188efae56"} Jan 03 05:58:01 crc kubenswrapper[4854]: I0103 05:58:01.700692 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-bzjqc"] Jan 03 05:58:01 crc kubenswrapper[4854]: W0103 05:58:01.707119 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1422b70_f6c6_46f8_81b3_1d2f35800374.slice/crio-970b8afac462a5bed707f16c76dbdd01584bdcba3012266d5d0c2892f2e16d06 WatchSource:0}: Error finding container 970b8afac462a5bed707f16c76dbdd01584bdcba3012266d5d0c2892f2e16d06: Status 404 returned error can't find the container with id 970b8afac462a5bed707f16c76dbdd01584bdcba3012266d5d0c2892f2e16d06 Jan 03 05:58:01 crc kubenswrapper[4854]: I0103 05:58:01.993804 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:01 crc kubenswrapper[4854]: E0103 05:58:01.994046 4854 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 03 05:58:01 crc kubenswrapper[4854]: E0103 05:58:01.994495 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist podName:b826d6d3-0de8-4b3d-9294-9e5f8f9faae6 nodeName:}" failed. No retries permitted until 2026-01-03 05:58:03.994467949 +0000 UTC m=+1062.321044521 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist") pod "speaker-9mfrk" (UID: "b826d6d3-0de8-4b3d-9294-9e5f8f9faae6") : secret "metallb-memberlist" not found Jan 03 05:58:02 crc kubenswrapper[4854]: I0103 05:58:02.688267 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-bzjqc" event={"ID":"d1422b70-f6c6-46f8-81b3-1d2f35800374","Type":"ContainerStarted","Data":"60bb7ed57e082ed35fa06bd2cf9cbd0b2b013e4b449bf4724940af98ee5844ee"} Jan 03 05:58:02 crc kubenswrapper[4854]: I0103 05:58:02.688317 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-bzjqc" event={"ID":"d1422b70-f6c6-46f8-81b3-1d2f35800374","Type":"ContainerStarted","Data":"179fce328f6e20fdc3653c24e9f94fa6adc4fd59f7d79ee8575929659e342509"} Jan 03 05:58:02 crc kubenswrapper[4854]: I0103 05:58:02.688329 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-bzjqc" event={"ID":"d1422b70-f6c6-46f8-81b3-1d2f35800374","Type":"ContainerStarted","Data":"970b8afac462a5bed707f16c76dbdd01584bdcba3012266d5d0c2892f2e16d06"} Jan 03 05:58:02 crc kubenswrapper[4854]: I0103 05:58:02.689290 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:02 crc kubenswrapper[4854]: I0103 05:58:02.718465 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-bzjqc" podStartSLOduration=2.7184444819999998 podStartE2EDuration="2.718444482s" podCreationTimestamp="2026-01-03 05:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:58:02.717404196 +0000 UTC m=+1061.043980788" watchObservedRunningTime="2026-01-03 05:58:02.718444482 +0000 UTC m=+1061.045021054" Jan 03 05:58:04 crc kubenswrapper[4854]: I0103 05:58:04.043474 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:04 crc kubenswrapper[4854]: I0103 05:58:04.049668 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b826d6d3-0de8-4b3d-9294-9e5f8f9faae6-memberlist\") pod \"speaker-9mfrk\" (UID: \"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6\") " pod="metallb-system/speaker-9mfrk" Jan 03 05:58:04 crc kubenswrapper[4854]: I0103 05:58:04.133692 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9mfrk" Jan 03 05:58:04 crc kubenswrapper[4854]: I0103 05:58:04.744168 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9mfrk" event={"ID":"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6","Type":"ContainerStarted","Data":"252cba5186a3c3dc8dd0d53e03137843f95f7633b9b36e4815bc42dab4f08ae0"} Jan 03 05:58:04 crc kubenswrapper[4854]: I0103 05:58:04.744531 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9mfrk" event={"ID":"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6","Type":"ContainerStarted","Data":"0e5b6b20acded5142bdf0e1f611fc8de6e58a327839767e9a8e8e33357ed1e1c"} Jan 03 05:58:05 crc kubenswrapper[4854]: I0103 05:58:05.752740 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9mfrk" event={"ID":"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6","Type":"ContainerStarted","Data":"9296224555b6bee1d1041f6da05308b073bb33922d6d462dfbff2a6080d7da49"} Jan 03 05:58:05 crc kubenswrapper[4854]: I0103 05:58:05.753996 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9mfrk" Jan 03 05:58:11 crc kubenswrapper[4854]: I0103 05:58:11.163181 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 05:58:11 crc kubenswrapper[4854]: I0103 05:58:11.183655 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9mfrk" podStartSLOduration=11.18363518 podStartE2EDuration="11.18363518s" podCreationTimestamp="2026-01-03 05:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:58:05.772216145 +0000 UTC m=+1064.098792717" watchObservedRunningTime="2026-01-03 05:58:11.18363518 +0000 UTC m=+1069.510211752" Jan 03 05:58:11 crc kubenswrapper[4854]: I0103 05:58:11.755739 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:58:11 crc kubenswrapper[4854]: I0103 05:58:11.755824 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:58:11 crc kubenswrapper[4854]: I0103 05:58:11.755885 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 05:58:11 crc kubenswrapper[4854]: I0103 05:58:11.756884 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"382eac6c86719b2cf06557df9d71397fec24546fd4a1359e257bb73a0fbe3ef6"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:58:11 crc kubenswrapper[4854]: I0103 05:58:11.756983 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" 
containerName="machine-config-daemon" containerID="cri-o://382eac6c86719b2cf06557df9d71397fec24546fd4a1359e257bb73a0fbe3ef6" gracePeriod=600 Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.842016 4854 generic.go:334] "Generic (PLEG): container finished" podID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerID="df14de77972e592276aee08f5b37a3add4274aa398a8d54945e006fe982b172c" exitCode=0 Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.842131 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerDied","Data":"df14de77972e592276aee08f5b37a3add4274aa398a8d54945e006fe982b172c"} Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.845359 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" event={"ID":"ea9863f6-8706-4844-ad3e-93309cdbef22","Type":"ContainerStarted","Data":"c5a1cc71bf27b754936bdde8e575bd7aa1f0da15a1c6e03b51b95f04ffc0c08b"} Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.845508 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.853509 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="382eac6c86719b2cf06557df9d71397fec24546fd4a1359e257bb73a0fbe3ef6" exitCode=0 Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.853647 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"382eac6c86719b2cf06557df9d71397fec24546fd4a1359e257bb73a0fbe3ef6"} Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.853760 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"698413854ef7d83140b2bf1b7914886f1f1ee8bad9480a9e32b96368143c12a3"} Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.853856 4854 scope.go:117] "RemoveContainer" containerID="e01a9e027959d17e8604f32720a945b283546578a7ff1bc2cd05356d9cba66ad" Jan 03 05:58:13 crc kubenswrapper[4854]: I0103 05:58:13.885032 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" podStartSLOduration=2.07935755 podStartE2EDuration="13.885012438s" podCreationTimestamp="2026-01-03 05:58:00 +0000 UTC" firstStartedPulling="2026-01-03 05:58:01.605018623 +0000 UTC m=+1059.931595215" lastFinishedPulling="2026-01-03 05:58:13.410673521 +0000 UTC m=+1071.737250103" observedRunningTime="2026-01-03 05:58:13.884528476 +0000 UTC m=+1072.211105068" watchObservedRunningTime="2026-01-03 05:58:13.885012438 +0000 UTC m=+1072.211589020" Jan 03 05:58:14 crc kubenswrapper[4854]: I0103 05:58:14.138815 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9mfrk" Jan 03 05:58:14 crc kubenswrapper[4854]: I0103 05:58:14.867778 4854 generic.go:334] "Generic (PLEG): container finished" podID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerID="4cd56ce3fa52418c57d232ddac56a4398be3769ab99876b4ee0812bd268a2128" exitCode=0 Jan 03 05:58:14 crc kubenswrapper[4854]: I0103 05:58:14.867835 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" 
event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerDied","Data":"4cd56ce3fa52418c57d232ddac56a4398be3769ab99876b4ee0812bd268a2128"} Jan 03 05:58:15 crc kubenswrapper[4854]: I0103 05:58:15.877424 4854 generic.go:334] "Generic (PLEG): container finished" podID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerID="29016c519de4cb496536997321ac5fde7f2ada2fcddc88aeedae118bbe1632c3" exitCode=0 Jan 03 05:58:15 crc kubenswrapper[4854]: I0103 05:58:15.877504 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerDied","Data":"29016c519de4cb496536997321ac5fde7f2ada2fcddc88aeedae118bbe1632c3"} Jan 03 05:58:16 crc kubenswrapper[4854]: I0103 05:58:16.889987 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"038cf4b7df9e34af1e4e0ae5d12a6b2db72aaa42fcba0d53a4145adfaab19edb"} Jan 03 05:58:16 crc kubenswrapper[4854]: I0103 05:58:16.890349 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"7b630a3efa8ce39674fa7e49283a955b29131890c4af3ab5fd62d185244310c4"} Jan 03 05:58:16 crc kubenswrapper[4854]: I0103 05:58:16.890363 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"9539e21b873a1e4b3365e05006ca6162cad1734de5e700761c66055ef4d7c3a1"} Jan 03 05:58:16 crc kubenswrapper[4854]: I0103 05:58:16.890375 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"820d48d16dc8ad8bfb1070482e2c87343667e66671ad0c016f3473c4af9b4abf"} Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.067692 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9mhgb"] Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.069883 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9mhgb" Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.072247 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.073237 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-b7mk2" Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.073596 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.083046 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9mhgb"] Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.221250 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wjvt\" (UniqueName: \"kubernetes.io/projected/ed9b9fda-53a8-4374-a4ca-3a432c85c8d4-kube-api-access-8wjvt\") pod \"openstack-operator-index-9mhgb\" (UID: \"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4\") " pod="openstack-operators/openstack-operator-index-9mhgb" Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.323530 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wjvt\" (UniqueName: \"kubernetes.io/projected/ed9b9fda-53a8-4374-a4ca-3a432c85c8d4-kube-api-access-8wjvt\") pod \"openstack-operator-index-9mhgb\" (UID: \"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4\") " pod="openstack-operators/openstack-operator-index-9mhgb" Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.351804 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wjvt\" (UniqueName: \"kubernetes.io/projected/ed9b9fda-53a8-4374-a4ca-3a432c85c8d4-kube-api-access-8wjvt\") pod \"openstack-operator-index-9mhgb\" (UID: \"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4\") " pod="openstack-operators/openstack-operator-index-9mhgb" Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.400136 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9mhgb" Jan 03 05:58:17 crc kubenswrapper[4854]: I0103 05:58:17.912836 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9mhgb"] Jan 03 05:58:17 crc kubenswrapper[4854]: W0103 05:58:17.928956 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded9b9fda_53a8_4374_a4ca_3a432c85c8d4.slice/crio-8c32287d87010f4f7c6b9538ae9ec5e09ecc4bdfb1a52b8a2b95ae38654827dc WatchSource:0}: Error finding container 8c32287d87010f4f7c6b9538ae9ec5e09ecc4bdfb1a52b8a2b95ae38654827dc: Status 404 returned error can't find the container with id 8c32287d87010f4f7c6b9538ae9ec5e09ecc4bdfb1a52b8a2b95ae38654827dc Jan 03 05:58:18 crc kubenswrapper[4854]: I0103 05:58:18.916023 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9mhgb" event={"ID":"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4","Type":"ContainerStarted","Data":"8c32287d87010f4f7c6b9538ae9ec5e09ecc4bdfb1a52b8a2b95ae38654827dc"} Jan 03 05:58:18 crc kubenswrapper[4854]: I0103 05:58:18.925311 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"acbcf2025bf93ad4f323e7f621694d8ebec28c054723f5f038965a3221f5523c"} Jan 03 05:58:18 crc kubenswrapper[4854]: I0103 05:58:18.925367 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"38995c387a3001bc33d1901015805f9c01b0463a6f317e2190e1b6201c58b4b5"} Jan 03 05:58:18 crc kubenswrapper[4854]: I0103 05:58:18.925515 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:18 crc kubenswrapper[4854]: I0103 05:58:18.955878 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6fczv" podStartSLOduration=6.841197338 podStartE2EDuration="18.955854653s" podCreationTimestamp="2026-01-03 05:58:00 +0000 UTC" firstStartedPulling="2026-01-03 05:58:01.263952371 +0000 UTC m=+1059.590528943" lastFinishedPulling="2026-01-03 05:58:13.378609676 +0000 UTC m=+1071.705186258" observedRunningTime="2026-01-03 05:58:18.950610831 +0000 UTC m=+1077.277187413" watchObservedRunningTime="2026-01-03 05:58:18.955854653 +0000 UTC m=+1077.282431235" Jan 03 05:58:20 crc kubenswrapper[4854]: I0103 05:58:20.431370 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9mhgb"] Jan 03 05:58:20 crc kubenswrapper[4854]: I0103 05:58:20.943273 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9mhgb" event={"ID":"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4","Type":"ContainerStarted","Data":"5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1"} Jan 03 05:58:20 crc kubenswrapper[4854]: I0103 05:58:20.943523 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9mhgb" podUID="ed9b9fda-53a8-4374-a4ca-3a432c85c8d4" containerName="registry-server" containerID="cri-o://5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1" gracePeriod=2 Jan 03 05:58:20 crc kubenswrapper[4854]: I0103 05:58:20.964992 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-9mhgb" podStartSLOduration=1.432052955 podStartE2EDuration="3.964972735s" podCreationTimestamp="2026-01-03 05:58:17 +0000 UTC" firstStartedPulling="2026-01-03 05:58:17.931533531 +0000 UTC m=+1076.258110103" lastFinishedPulling="2026-01-03 05:58:20.464453311 +0000 UTC m=+1078.791029883" observedRunningTime="2026-01-03 05:58:20.959330563 +0000 UTC m=+1079.285907135" watchObservedRunningTime="2026-01-03 05:58:20.964972735 +0000 UTC m=+1079.291549297" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.025879 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7vksk"] Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.027753 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.038293 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7vksk"] Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.071391 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.091900 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9m2l\" (UniqueName: \"kubernetes.io/projected/ec8a24a9-62d4-4db8-8f17-f261a85d6a47-kube-api-access-p9m2l\") pod \"openstack-operator-index-7vksk\" (UID: \"ec8a24a9-62d4-4db8-8f17-f261a85d6a47\") " pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.107052 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.195392 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9m2l\" (UniqueName: \"kubernetes.io/projected/ec8a24a9-62d4-4db8-8f17-f261a85d6a47-kube-api-access-p9m2l\") pod \"openstack-operator-index-7vksk\" (UID: \"ec8a24a9-62d4-4db8-8f17-f261a85d6a47\") " pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.237007 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9m2l\" (UniqueName: \"kubernetes.io/projected/ec8a24a9-62d4-4db8-8f17-f261a85d6a47-kube-api-access-p9m2l\") pod \"openstack-operator-index-7vksk\" (UID: \"ec8a24a9-62d4-4db8-8f17-f261a85d6a47\") " pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.385646 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9mhgb" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.400821 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.500375 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wjvt\" (UniqueName: \"kubernetes.io/projected/ed9b9fda-53a8-4374-a4ca-3a432c85c8d4-kube-api-access-8wjvt\") pod \"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4\" (UID: \"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4\") " Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.506648 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed9b9fda-53a8-4374-a4ca-3a432c85c8d4-kube-api-access-8wjvt" (OuterVolumeSpecName: "kube-api-access-8wjvt") pod "ed9b9fda-53a8-4374-a4ca-3a432c85c8d4" (UID: "ed9b9fda-53a8-4374-a4ca-3a432c85c8d4"). InnerVolumeSpecName "kube-api-access-8wjvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.604143 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wjvt\" (UniqueName: \"kubernetes.io/projected/ed9b9fda-53a8-4374-a4ca-3a432c85c8d4-kube-api-access-8wjvt\") on node \"crc\" DevicePath \"\"" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.961976 4854 generic.go:334] "Generic (PLEG): container finished" podID="ed9b9fda-53a8-4374-a4ca-3a432c85c8d4" containerID="5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1" exitCode=0 Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.962922 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9mhgb" event={"ID":"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4","Type":"ContainerDied","Data":"5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1"} Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.963036 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9mhgb" event={"ID":"ed9b9fda-53a8-4374-a4ca-3a432c85c8d4","Type":"ContainerDied","Data":"8c32287d87010f4f7c6b9538ae9ec5e09ecc4bdfb1a52b8a2b95ae38654827dc"} Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.963061 4854 scope.go:117] "RemoveContainer" containerID="5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1" Jan 03 05:58:21 crc kubenswrapper[4854]: I0103 05:58:21.963299 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9mhgb" Jan 03 05:58:22 crc kubenswrapper[4854]: I0103 05:58:22.005416 4854 scope.go:117] "RemoveContainer" containerID="5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1" Jan 03 05:58:22 crc kubenswrapper[4854]: E0103 05:58:22.006818 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1\": container with ID starting with 5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1 not found: ID does not exist" containerID="5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1" Jan 03 05:58:22 crc kubenswrapper[4854]: I0103 05:58:22.006856 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1"} err="failed to get container status \"5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1\": rpc error: code = NotFound desc = could not find container \"5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1\": container with ID starting with 5826ce69bcc369c4826b1a33d75cc17d2c00bd5a03504cd60f2c3c576ebecbd1 not found: ID does not exist" Jan 03 05:58:22 crc kubenswrapper[4854]: I0103 05:58:22.011639 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7vksk"] Jan 03 05:58:22 crc kubenswrapper[4854]: I0103 05:58:22.028328 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9mhgb"] Jan 03 05:58:22 crc kubenswrapper[4854]: W0103 05:58:22.032631 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8a24a9_62d4_4db8_8f17_f261a85d6a47.slice/crio-b9855a22fe4d579eaf1b57ce9e04822bb7bf94bce65071ab56a3c8be427b3205 WatchSource:0}: Error finding container b9855a22fe4d579eaf1b57ce9e04822bb7bf94bce65071ab56a3c8be427b3205: Status 404 returned error can't find the container with id b9855a22fe4d579eaf1b57ce9e04822bb7bf94bce65071ab56a3c8be427b3205 Jan 03 05:58:22 crc kubenswrapper[4854]: I0103 05:58:22.037191 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9mhgb"] Jan 03 05:58:22 crc kubenswrapper[4854]: I0103 05:58:22.127512 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed9b9fda-53a8-4374-a4ca-3a432c85c8d4" path="/var/lib/kubelet/pods/ed9b9fda-53a8-4374-a4ca-3a432c85c8d4/volumes" Jan 03 05:58:22 crc kubenswrapper[4854]: I0103 05:58:22.972996 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7vksk" event={"ID":"ec8a24a9-62d4-4db8-8f17-f261a85d6a47","Type":"ContainerStarted","Data":"bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757"} Jan 03 05:58:22 crc kubenswrapper[4854]: I0103 05:58:22.973470 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7vksk" event={"ID":"ec8a24a9-62d4-4db8-8f17-f261a85d6a47","Type":"ContainerStarted","Data":"b9855a22fe4d579eaf1b57ce9e04822bb7bf94bce65071ab56a3c8be427b3205"} Jan 03 05:58:23 crc kubenswrapper[4854]: I0103 05:58:23.011765 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7vksk" podStartSLOduration=1.929815276 podStartE2EDuration="2.011745261s" 
podCreationTimestamp="2026-01-03 05:58:21 +0000 UTC" firstStartedPulling="2026-01-03 05:58:22.034137303 +0000 UTC m=+1080.360713885" lastFinishedPulling="2026-01-03 05:58:22.116067298 +0000 UTC m=+1080.442643870" observedRunningTime="2026-01-03 05:58:22.992214981 +0000 UTC m=+1081.318791583" watchObservedRunningTime="2026-01-03 05:58:23.011745261 +0000 UTC m=+1081.338321833" Jan 03 05:58:31 crc kubenswrapper[4854]: I0103 05:58:31.064188 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 05:58:31 crc kubenswrapper[4854]: I0103 05:58:31.072847 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6fczv" Jan 03 05:58:31 crc kubenswrapper[4854]: I0103 05:58:31.401448 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:31 crc kubenswrapper[4854]: I0103 05:58:31.401511 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:31 crc kubenswrapper[4854]: I0103 05:58:31.445989 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:32 crc kubenswrapper[4854]: I0103 05:58:32.095015 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.715026 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k"] Jan 03 05:58:38 crc kubenswrapper[4854]: E0103 05:58:38.715953 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9b9fda-53a8-4374-a4ca-3a432c85c8d4" containerName="registry-server" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.715966 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9b9fda-53a8-4374-a4ca-3a432c85c8d4" containerName="registry-server" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.716148 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9b9fda-53a8-4374-a4ca-3a432c85c8d4" containerName="registry-server" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.717340 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.719490 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rnrnr" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.731137 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k"] Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.753672 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-bundle\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.753759 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-util\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.753787 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shw6h\" (UniqueName: \"kubernetes.io/projected/b71122bd-3890-464d-a427-fff759045806-kube-api-access-shw6h\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.855456 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-bundle\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.855562 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-util\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.855592 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shw6h\" (UniqueName: \"kubernetes.io/projected/b71122bd-3890-464d-a427-fff759045806-kube-api-access-shw6h\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.856228 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-bundle\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.856347 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-util\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:38 crc kubenswrapper[4854]: I0103 05:58:38.876444 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shw6h\" (UniqueName: \"kubernetes.io/projected/b71122bd-3890-464d-a427-fff759045806-kube-api-access-shw6h\") pod \"22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:39 crc kubenswrapper[4854]: I0103 05:58:39.088514 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:39 crc kubenswrapper[4854]: I0103 05:58:39.508893 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k"] Jan 03 05:58:40 crc kubenswrapper[4854]: I0103 05:58:40.137210 4854 generic.go:334] "Generic (PLEG): container finished" podID="b71122bd-3890-464d-a427-fff759045806" containerID="97fca2c8b987e71363c336547fa5c0202682617d6a27f97c96ef01a90afa8f65" exitCode=0 Jan 03 05:58:40 crc kubenswrapper[4854]: I0103 05:58:40.137642 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" event={"ID":"b71122bd-3890-464d-a427-fff759045806","Type":"ContainerDied","Data":"97fca2c8b987e71363c336547fa5c0202682617d6a27f97c96ef01a90afa8f65"} Jan 03 05:58:40 crc kubenswrapper[4854]: I0103 05:58:40.137688 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" event={"ID":"b71122bd-3890-464d-a427-fff759045806","Type":"ContainerStarted","Data":"a836d72ba6a96459dc2535b0573266413fac309dc0ab942d98d5e098709b7cd0"} Jan 03 05:58:42 crc kubenswrapper[4854]: I0103 05:58:42.159487 4854 generic.go:334] "Generic (PLEG): container finished" podID="b71122bd-3890-464d-a427-fff759045806" containerID="0afcef6eb5d5d0fb844c98e261d4f0e8325855b23b8f5afa3df6edf9835de309" exitCode=0 Jan 03 05:58:42 crc kubenswrapper[4854]: I0103 05:58:42.159552 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" event={"ID":"b71122bd-3890-464d-a427-fff759045806","Type":"ContainerDied","Data":"0afcef6eb5d5d0fb844c98e261d4f0e8325855b23b8f5afa3df6edf9835de309"} Jan 03 05:58:43 crc kubenswrapper[4854]: I0103 05:58:43.172756 4854 generic.go:334] "Generic (PLEG): container finished" podID="b71122bd-3890-464d-a427-fff759045806" containerID="353dd6ff45ad91e5afa8ffe5cf3bf11baba1f13c2424930d027fad32571f2eca" exitCode=0 Jan 03 05:58:43 crc kubenswrapper[4854]: I0103 05:58:43.172834 4854 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" event={"ID":"b71122bd-3890-464d-a427-fff759045806","Type":"ContainerDied","Data":"353dd6ff45ad91e5afa8ffe5cf3bf11baba1f13c2424930d027fad32571f2eca"} Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.579602 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.667685 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-util\") pod \"b71122bd-3890-464d-a427-fff759045806\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.667959 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shw6h\" (UniqueName: \"kubernetes.io/projected/b71122bd-3890-464d-a427-fff759045806-kube-api-access-shw6h\") pod \"b71122bd-3890-464d-a427-fff759045806\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.668055 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-bundle\") pod \"b71122bd-3890-464d-a427-fff759045806\" (UID: \"b71122bd-3890-464d-a427-fff759045806\") " Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.669610 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-bundle" (OuterVolumeSpecName: "bundle") pod "b71122bd-3890-464d-a427-fff759045806" (UID: "b71122bd-3890-464d-a427-fff759045806"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.682552 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71122bd-3890-464d-a427-fff759045806-kube-api-access-shw6h" (OuterVolumeSpecName: "kube-api-access-shw6h") pod "b71122bd-3890-464d-a427-fff759045806" (UID: "b71122bd-3890-464d-a427-fff759045806"). InnerVolumeSpecName "kube-api-access-shw6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.692675 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-util" (OuterVolumeSpecName: "util") pod "b71122bd-3890-464d-a427-fff759045806" (UID: "b71122bd-3890-464d-a427-fff759045806"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.770146 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shw6h\" (UniqueName: \"kubernetes.io/projected/b71122bd-3890-464d-a427-fff759045806-kube-api-access-shw6h\") on node \"crc\" DevicePath \"\"" Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.770195 4854 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:58:44 crc kubenswrapper[4854]: I0103 05:58:44.770208 4854 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b71122bd-3890-464d-a427-fff759045806-util\") on node \"crc\" DevicePath \"\"" Jan 03 05:58:45 crc kubenswrapper[4854]: I0103 05:58:45.193816 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" event={"ID":"b71122bd-3890-464d-a427-fff759045806","Type":"ContainerDied","Data":"a836d72ba6a96459dc2535b0573266413fac309dc0ab942d98d5e098709b7cd0"} Jan 03 05:58:45 crc kubenswrapper[4854]: I0103 05:58:45.193873 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a836d72ba6a96459dc2535b0573266413fac309dc0ab942d98d5e098709b7cd0" Jan 03 05:58:45 crc kubenswrapper[4854]: I0103 05:58:45.193940 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/22e8c3df57bae442a7d819714ad2d4060a2a8cb9b2c08fade900b10f12x885k" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.098631 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs"] Jan 03 05:59:03 crc kubenswrapper[4854]: E0103 05:59:03.100701 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71122bd-3890-464d-a427-fff759045806" containerName="util" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.100719 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71122bd-3890-464d-a427-fff759045806" containerName="util" Jan 03 05:59:03 crc kubenswrapper[4854]: E0103 05:59:03.100742 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71122bd-3890-464d-a427-fff759045806" containerName="extract" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.100751 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71122bd-3890-464d-a427-fff759045806" containerName="extract" Jan 03 05:59:03 crc kubenswrapper[4854]: E0103 05:59:03.100766 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71122bd-3890-464d-a427-fff759045806" containerName="pull" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.100773 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71122bd-3890-464d-a427-fff759045806" containerName="pull" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.100925 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71122bd-3890-464d-a427-fff759045806" containerName="extract" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.101573 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.104965 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-hzmnx" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.145490 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7pvs\" (UniqueName: \"kubernetes.io/projected/bc9994eb-5930-484d-a02c-60d4e13483e2-kube-api-access-m7pvs\") pod \"openstack-operator-controller-operator-c8b457848-dg5cs\" (UID: \"bc9994eb-5930-484d-a02c-60d4e13483e2\") " pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.152563 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs"] Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.248273 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7pvs\" (UniqueName: \"kubernetes.io/projected/bc9994eb-5930-484d-a02c-60d4e13483e2-kube-api-access-m7pvs\") pod \"openstack-operator-controller-operator-c8b457848-dg5cs\" (UID: \"bc9994eb-5930-484d-a02c-60d4e13483e2\") " pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.273711 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7pvs\" (UniqueName: \"kubernetes.io/projected/bc9994eb-5930-484d-a02c-60d4e13483e2-kube-api-access-m7pvs\") pod \"openstack-operator-controller-operator-c8b457848-dg5cs\" (UID: \"bc9994eb-5930-484d-a02c-60d4e13483e2\") " pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.432290 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 05:59:03 crc kubenswrapper[4854]: I0103 05:59:03.890763 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs"] Jan 03 05:59:04 crc kubenswrapper[4854]: I0103 05:59:04.377492 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" event={"ID":"bc9994eb-5930-484d-a02c-60d4e13483e2","Type":"ContainerStarted","Data":"6cbcd0ae913da38a1ce1357e9d04d0230d9540b405856f783f296830b9c0faba"} Jan 03 05:59:10 crc kubenswrapper[4854]: I0103 05:59:10.450676 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" event={"ID":"bc9994eb-5930-484d-a02c-60d4e13483e2","Type":"ContainerStarted","Data":"423337c31d04ab35f34cc1bfe20f120baa0b2e3d55c33fe2710212e8b1497b88"} Jan 03 05:59:10 crc kubenswrapper[4854]: I0103 05:59:10.451338 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 05:59:10 crc kubenswrapper[4854]: I0103 05:59:10.478898 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podStartSLOduration=1.9176863499999999 podStartE2EDuration="7.478877823s" podCreationTimestamp="2026-01-03 05:59:03 +0000 UTC" firstStartedPulling="2026-01-03 05:59:03.893623664 +0000 UTC m=+1122.220200256" lastFinishedPulling="2026-01-03 05:59:09.454815157 +0000 UTC m=+1127.781391729" observedRunningTime="2026-01-03 05:59:10.474336209 +0000 UTC m=+1128.800912821" watchObservedRunningTime="2026-01-03 05:59:10.478877823 +0000 UTC m=+1128.805454415" Jan 03 05:59:23 crc kubenswrapper[4854]: I0103 05:59:23.435593 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.124710 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.126350 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.127988 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lbr9c" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.132681 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.133937 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.136462 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zzrjq" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.140180 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.152504 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.159179 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.160445 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.162774 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-snbqf" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.172800 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.174072 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.178371 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-znm8r" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.184257 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.205913 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.207125 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.208556 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mnf4q" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.220251 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.242115 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.243205 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.249478 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-x2jvc" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.258917 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tz65\" (UniqueName: \"kubernetes.io/projected/40ad961e-d740-49fa-9a1f-e9d950002a3e-kube-api-access-9tz65\") pod \"cinder-operator-controller-manager-78979fc445-jx5q2\" (UID: \"40ad961e-d740-49fa-9a1f-e9d950002a3e\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.258962 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zd5\" (UniqueName: \"kubernetes.io/projected/c2f6c336-91f0-41e6-b439-c5d940264b7f-kube-api-access-t2zd5\") pod \"designate-operator-controller-manager-66f8b87655-msvf6\" (UID: \"c2f6c336-91f0-41e6-b439-c5d940264b7f\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.259015 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbtlt\" (UniqueName: \"kubernetes.io/projected/81de0b3b-e6fc-45c9-b347-995726d00213-kube-api-access-cbtlt\") pod \"barbican-operator-controller-manager-f6f74d6db-jvp7v\" (UID: \"81de0b3b-e6fc-45c9-b347-995726d00213\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.266997 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.268493 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.272891 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.272975 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jkg22" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.293808 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.300156 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.307650 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.308975 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.314147 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.314605 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6fx8m" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.335821 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.345942 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-x78fv"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.347048 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.352018 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.355915 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.367260 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-crwmj" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.369105 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbtlt\" (UniqueName: \"kubernetes.io/projected/81de0b3b-e6fc-45c9-b347-995726d00213-kube-api-access-cbtlt\") pod \"barbican-operator-controller-manager-f6f74d6db-jvp7v\" (UID: \"81de0b3b-e6fc-45c9-b347-995726d00213\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.369167 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkj4v\" (UniqueName: \"kubernetes.io/projected/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-kube-api-access-qkj4v\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.369205 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.369239 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92xv6\" (UniqueName: \"kubernetes.io/projected/f5b690cb-eb48-469c-a774-eff5eda46f89-kube-api-access-92xv6\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-trsxr\" (UID: \"f5b690cb-eb48-469c-a774-eff5eda46f89\") " 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.369275 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxd2\" (UniqueName: \"kubernetes.io/projected/7d4776d0-290f-4c82-aa5c-6412b5bb4608-kube-api-access-5lxd2\") pod \"heat-operator-controller-manager-658dd65b86-k6nnf\" (UID: \"7d4776d0-290f-4c82-aa5c-6412b5bb4608\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.369306 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tz65\" (UniqueName: \"kubernetes.io/projected/40ad961e-d740-49fa-9a1f-e9d950002a3e-kube-api-access-9tz65\") pod \"cinder-operator-controller-manager-78979fc445-jx5q2\" (UID: \"40ad961e-d740-49fa-9a1f-e9d950002a3e\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.369342 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2zd5\" (UniqueName: \"kubernetes.io/projected/c2f6c336-91f0-41e6-b439-c5d940264b7f-kube-api-access-t2zd5\") pod \"designate-operator-controller-manager-66f8b87655-msvf6\" (UID: \"c2f6c336-91f0-41e6-b439-c5d940264b7f\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.369371 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbklk\" (UniqueName: \"kubernetes.io/projected/a327e8cf-824f-41b1-9076-5fd57a8b4352-kube-api-access-wbklk\") pod \"glance-operator-controller-manager-7b549fc966-hgwsb\" (UID: \"a327e8cf-824f-41b1-9076-5fd57a8b4352\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.385724 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-x78fv"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.393406 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mv9pt" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.411015 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.412256 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.414764 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.426906 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-ltffm" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.439536 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tz65\" (UniqueName: \"kubernetes.io/projected/40ad961e-d740-49fa-9a1f-e9d950002a3e-kube-api-access-9tz65\") pod \"cinder-operator-controller-manager-78979fc445-jx5q2\" (UID: \"40ad961e-d740-49fa-9a1f-e9d950002a3e\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.447701 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbtlt\" (UniqueName: \"kubernetes.io/projected/81de0b3b-e6fc-45c9-b347-995726d00213-kube-api-access-cbtlt\") pod \"barbican-operator-controller-manager-f6f74d6db-jvp7v\" (UID: \"81de0b3b-e6fc-45c9-b347-995726d00213\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.449762 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2zd5\" (UniqueName: \"kubernetes.io/projected/c2f6c336-91f0-41e6-b439-c5d940264b7f-kube-api-access-t2zd5\") pod \"designate-operator-controller-manager-66f8b87655-msvf6\" (UID: \"c2f6c336-91f0-41e6-b439-c5d940264b7f\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.460127 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.461227 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.471622 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkj4v\" (UniqueName: \"kubernetes.io/projected/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-kube-api-access-qkj4v\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.471669 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.471709 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92xv6\" (UniqueName: \"kubernetes.io/projected/f5b690cb-eb48-469c-a774-eff5eda46f89-kube-api-access-92xv6\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-trsxr\" (UID: \"f5b690cb-eb48-469c-a774-eff5eda46f89\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.471748 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lxd2\" (UniqueName: \"kubernetes.io/projected/7d4776d0-290f-4c82-aa5c-6412b5bb4608-kube-api-access-5lxd2\") pod \"heat-operator-controller-manager-658dd65b86-k6nnf\" (UID: \"7d4776d0-290f-4c82-aa5c-6412b5bb4608\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.485547 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkf5h\" (UniqueName: \"kubernetes.io/projected/14991c3c-8c35-4008-b1a0-1b8690074322-kube-api-access-nkf5h\") pod \"keystone-operator-controller-manager-568985c78-x78fv\" (UID: \"14991c3c-8c35-4008-b1a0-1b8690074322\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.485965 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbklk\" (UniqueName: \"kubernetes.io/projected/a327e8cf-824f-41b1-9076-5fd57a8b4352-kube-api-access-wbklk\") pod \"glance-operator-controller-manager-7b549fc966-hgwsb\" (UID: \"a327e8cf-824f-41b1-9076-5fd57a8b4352\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.486120 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1d8399ce-3c90-4601-9a32-31dc20da4552-kube-api-access-82z6r\") pod \"ironic-operator-controller-manager-f99f54bc8-4pbfn\" (UID: \"1d8399ce-3c90-4601-9a32-31dc20da4552\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.486175 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktn9\" (UniqueName: 
\"kubernetes.io/projected/25988b2b-1924-4007-a6b1-5e5403d5dc68-kube-api-access-hktn9\") pod \"mariadb-operator-controller-manager-7b88bfc995-vdnq9\" (UID: \"25988b2b-1924-4007-a6b1-5e5403d5dc68\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.501821 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-b4t7p" Jan 03 05:59:43 crc kubenswrapper[4854]: E0103 05:59:43.502650 4854 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 05:59:43 crc kubenswrapper[4854]: E0103 05:59:43.502705 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert podName:ba0f32da-a0e3-4c43-8dde-d6212a1c63e1 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:44.002687816 +0000 UTC m=+1162.329264388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert") pod "infra-operator-controller-manager-6d99759cf-qqbq9" (UID: "ba0f32da-a0e3-4c43-8dde-d6212a1c63e1") : secret "infra-operator-webhook-server-cert" not found Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.504700 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.512451 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.516538 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.528792 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.543256 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.546097 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92xv6\" (UniqueName: \"kubernetes.io/projected/f5b690cb-eb48-469c-a774-eff5eda46f89-kube-api-access-92xv6\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-trsxr\" (UID: \"f5b690cb-eb48-469c-a774-eff5eda46f89\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.547744 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbklk\" (UniqueName: \"kubernetes.io/projected/a327e8cf-824f-41b1-9076-5fd57a8b4352-kube-api-access-wbklk\") pod \"glance-operator-controller-manager-7b549fc966-hgwsb\" (UID: \"a327e8cf-824f-41b1-9076-5fd57a8b4352\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.558559 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkj4v\" (UniqueName: \"kubernetes.io/projected/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-kube-api-access-qkj4v\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.561406 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.614830 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c725l\" (UniqueName: \"kubernetes.io/projected/04d8c7f1-6674-45b0-9506-9d62c1a2f892-kube-api-access-c725l\") pod \"neutron-operator-controller-manager-7cd87b778f-xgtzc\" (UID: \"04d8c7f1-6674-45b0-9506-9d62c1a2f892\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.615105 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkf5h\" (UniqueName: \"kubernetes.io/projected/14991c3c-8c35-4008-b1a0-1b8690074322-kube-api-access-nkf5h\") pod \"keystone-operator-controller-manager-568985c78-x78fv\" (UID: \"14991c3c-8c35-4008-b1a0-1b8690074322\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.645132 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lxd2\" (UniqueName: \"kubernetes.io/projected/7d4776d0-290f-4c82-aa5c-6412b5bb4608-kube-api-access-5lxd2\") pod \"heat-operator-controller-manager-658dd65b86-k6nnf\" (UID: \"7d4776d0-290f-4c82-aa5c-6412b5bb4608\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.654376 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1d8399ce-3c90-4601-9a32-31dc20da4552-kube-api-access-82z6r\") pod 
\"ironic-operator-controller-manager-f99f54bc8-4pbfn\" (UID: \"1d8399ce-3c90-4601-9a32-31dc20da4552\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.655309 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktn9\" (UniqueName: \"kubernetes.io/projected/25988b2b-1924-4007-a6b1-5e5403d5dc68-kube-api-access-hktn9\") pod \"mariadb-operator-controller-manager-7b88bfc995-vdnq9\" (UID: \"25988b2b-1924-4007-a6b1-5e5403d5dc68\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.655359 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m8hq\" (UniqueName: \"kubernetes.io/projected/fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1-kube-api-access-6m8hq\") pod \"manila-operator-controller-manager-598945d5b8-z7cfx\" (UID: \"fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.678147 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.679914 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.700954 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bqfb8" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.715739 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.716186 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkf5h\" (UniqueName: \"kubernetes.io/projected/14991c3c-8c35-4008-b1a0-1b8690074322-kube-api-access-nkf5h\") pod \"keystone-operator-controller-manager-568985c78-x78fv\" (UID: \"14991c3c-8c35-4008-b1a0-1b8690074322\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.716805 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82z6r\" (UniqueName: \"kubernetes.io/projected/1d8399ce-3c90-4601-9a32-31dc20da4552-kube-api-access-82z6r\") pod \"ironic-operator-controller-manager-f99f54bc8-4pbfn\" (UID: \"1d8399ce-3c90-4601-9a32-31dc20da4552\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.731297 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.732774 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.737150 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-k4s56" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.737598 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.738938 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.740139 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.743624 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qqncc" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.749773 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.757020 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m8hq\" (UniqueName: \"kubernetes.io/projected/fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1-kube-api-access-6m8hq\") pod \"manila-operator-controller-manager-598945d5b8-z7cfx\" (UID: \"fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.757117 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtz7\" (UniqueName: \"kubernetes.io/projected/402a077e-f741-447d-ab1c-25bc62cd24cf-kube-api-access-xjtz7\") pod \"nova-operator-controller-manager-5fbbf8b6cc-ncjlb\" (UID: \"402a077e-f741-447d-ab1c-25bc62cd24cf\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.757194 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c725l\" (UniqueName: \"kubernetes.io/projected/04d8c7f1-6674-45b0-9506-9d62c1a2f892-kube-api-access-c725l\") pod \"neutron-operator-controller-manager-7cd87b778f-xgtzc\" (UID: \"04d8c7f1-6674-45b0-9506-9d62c1a2f892\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.762125 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.763338 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.792544 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.793802 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.793973 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.794198 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.794483 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-trlt7" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.817140 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.818323 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.840236 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.841982 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.843001 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.849419 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.850917 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.890386 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr4k6\" (UniqueName: \"kubernetes.io/projected/05f5522f-8e47-4d35-be75-2edee0f16f77-kube-api-access-pr4k6\") pod \"octavia-operator-controller-manager-68c649d9d-8xksh\" (UID: \"05f5522f-8e47-4d35-be75-2edee0f16f77\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.890434 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjtz7\" (UniqueName: \"kubernetes.io/projected/402a077e-f741-447d-ab1c-25bc62cd24cf-kube-api-access-xjtz7\") pod \"nova-operator-controller-manager-5fbbf8b6cc-ncjlb\" (UID: \"402a077e-f741-447d-ab1c-25bc62cd24cf\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.890517 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvqcj\" (UniqueName: \"kubernetes.io/projected/6515eec5-5595-42cb-8588-81baa0db47c1-kube-api-access-xvqcj\") pod \"placement-operator-controller-manager-9b6f8f78c-dprp4\" (UID: \"6515eec5-5595-42cb-8588-81baa0db47c1\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.891161 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjzd\" (UniqueName: \"kubernetes.io/projected/1f9928f3-0c28-40df-b6ad-c871424ad3a6-kube-api-access-8tjzd\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.891208 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.891231 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzxkc\" (UniqueName: \"kubernetes.io/projected/e62c43c5-cac2-4f9f-9e1b-de61827c4c94-kube-api-access-nzxkc\") pod \"ovn-operator-controller-manager-bf6d4f946-jqj54\" (UID: \"e62c43c5-cac2-4f9f-9e1b-de61827c4c94\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.893406 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.894803 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-b8dq7" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.895462 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-k6b5s" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.895792 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dqmwp" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.905229 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.906613 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.907419 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.907513 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.910902 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.914028 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktn9\" (UniqueName: \"kubernetes.io/projected/25988b2b-1924-4007-a6b1-5e5403d5dc68-kube-api-access-hktn9\") pod \"mariadb-operator-controller-manager-7b88bfc995-vdnq9\" (UID: \"25988b2b-1924-4007-a6b1-5e5403d5dc68\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.917643 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4scdp" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.917920 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sqdnw" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.925548 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m8hq\" (UniqueName: \"kubernetes.io/projected/fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1-kube-api-access-6m8hq\") pod \"manila-operator-controller-manager-598945d5b8-z7cfx\" (UID: \"fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.927724 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c725l\" (UniqueName: \"kubernetes.io/projected/04d8c7f1-6674-45b0-9506-9d62c1a2f892-kube-api-access-c725l\") pod \"neutron-operator-controller-manager-7cd87b778f-xgtzc\" (UID: \"04d8c7f1-6674-45b0-9506-9d62c1a2f892\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 
05:59:43.936835 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.944429 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.959396 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.967193 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz"] Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.967512 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjtz7\" (UniqueName: \"kubernetes.io/projected/402a077e-f741-447d-ab1c-25bc62cd24cf-kube-api-access-xjtz7\") pod \"nova-operator-controller-manager-5fbbf8b6cc-ncjlb\" (UID: \"402a077e-f741-447d-ab1c-25bc62cd24cf\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.986816 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.993346 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pqb\" (UniqueName: \"kubernetes.io/projected/ad6a18d3-e1d2-446a-9b41-a9fca5e8b574-kube-api-access-c5pqb\") pod \"test-operator-controller-manager-6c866cfdcb-7lvxp\" (UID: \"ad6a18d3-e1d2-446a-9b41-a9fca5e8b574\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.993414 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5nk\" (UniqueName: \"kubernetes.io/projected/ddf8e54e-858e-432c-ab2d-8b4d83f6282b-kube-api-access-rt5nk\") pod \"swift-operator-controller-manager-bb586bbf4-qzzw2\" (UID: \"ddf8e54e-858e-432c-ab2d-8b4d83f6282b\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.993455 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvqcj\" (UniqueName: \"kubernetes.io/projected/6515eec5-5595-42cb-8588-81baa0db47c1-kube-api-access-xvqcj\") pod \"placement-operator-controller-manager-9b6f8f78c-dprp4\" (UID: \"6515eec5-5595-42cb-8588-81baa0db47c1\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.993488 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgll\" (UniqueName: \"kubernetes.io/projected/56476ba9-ae33-4d34-855c-0e144e4f5da3-kube-api-access-nwgll\") pod \"watcher-operator-controller-manager-9dbdf6486-xrghz\" (UID: \"56476ba9-ae33-4d34-855c-0e144e4f5da3\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.993507 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tjzd\" (UniqueName: 
\"kubernetes.io/projected/1f9928f3-0c28-40df-b6ad-c871424ad3a6-kube-api-access-8tjzd\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.993540 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.993568 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzxkc\" (UniqueName: \"kubernetes.io/projected/e62c43c5-cac2-4f9f-9e1b-de61827c4c94-kube-api-access-nzxkc\") pod \"ovn-operator-controller-manager-bf6d4f946-jqj54\" (UID: \"e62c43c5-cac2-4f9f-9e1b-de61827c4c94\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 05:59:43 crc kubenswrapper[4854]: I0103 05:59:43.993604 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj2lk\" (UniqueName: \"kubernetes.io/projected/8f21d9f8-0bdd-43de-8196-186dccb7b2f8-kube-api-access-tj2lk\") pod \"telemetry-operator-controller-manager-7666dbdd4f-46t4f\" (UID: \"8f21d9f8-0bdd-43de-8196-186dccb7b2f8\") " pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:43.993665 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr4k6\" (UniqueName: \"kubernetes.io/projected/05f5522f-8e47-4d35-be75-2edee0f16f77-kube-api-access-pr4k6\") pod \"octavia-operator-controller-manager-68c649d9d-8xksh\" (UID: \"05f5522f-8e47-4d35-be75-2edee0f16f77\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:43.994331 4854 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:43.994380 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert podName:1f9928f3-0c28-40df-b6ad-c871424ad3a6 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:44.494364218 +0000 UTC m=+1162.820940790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" (UID: "1f9928f3-0c28-40df-b6ad-c871424ad3a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:43.995236 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.030960 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr4k6\" (UniqueName: \"kubernetes.io/projected/05f5522f-8e47-4d35-be75-2edee0f16f77-kube-api-access-pr4k6\") pod \"octavia-operator-controller-manager-68c649d9d-8xksh\" (UID: \"05f5522f-8e47-4d35-be75-2edee0f16f77\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.035774 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzxkc\" (UniqueName: \"kubernetes.io/projected/e62c43c5-cac2-4f9f-9e1b-de61827c4c94-kube-api-access-nzxkc\") pod \"ovn-operator-controller-manager-bf6d4f946-jqj54\" (UID: \"e62c43c5-cac2-4f9f-9e1b-de61827c4c94\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.100761 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.101009 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5pqb\" (UniqueName: \"kubernetes.io/projected/ad6a18d3-e1d2-446a-9b41-a9fca5e8b574-kube-api-access-c5pqb\") pod \"test-operator-controller-manager-6c866cfdcb-7lvxp\" (UID: \"ad6a18d3-e1d2-446a-9b41-a9fca5e8b574\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.101135 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt5nk\" (UniqueName: \"kubernetes.io/projected/ddf8e54e-858e-432c-ab2d-8b4d83f6282b-kube-api-access-rt5nk\") pod \"swift-operator-controller-manager-bb586bbf4-qzzw2\" (UID: \"ddf8e54e-858e-432c-ab2d-8b4d83f6282b\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.101241 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgll\" (UniqueName: \"kubernetes.io/projected/56476ba9-ae33-4d34-855c-0e144e4f5da3-kube-api-access-nwgll\") pod \"watcher-operator-controller-manager-9dbdf6486-xrghz\" (UID: \"56476ba9-ae33-4d34-855c-0e144e4f5da3\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.101353 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj2lk\" (UniqueName: \"kubernetes.io/projected/8f21d9f8-0bdd-43de-8196-186dccb7b2f8-kube-api-access-tj2lk\") pod \"telemetry-operator-controller-manager-7666dbdd4f-46t4f\" (UID: \"8f21d9f8-0bdd-43de-8196-186dccb7b2f8\") " pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:44.101533 4854 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:44.101576 4854 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert podName:ba0f32da-a0e3-4c43-8dde-d6212a1c63e1 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:45.101562149 +0000 UTC m=+1163.428138721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert") pod "infra-operator-controller-manager-6d99759cf-qqbq9" (UID: "ba0f32da-a0e3-4c43-8dde-d6212a1c63e1") : secret "infra-operator-webhook-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.110571 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tjzd\" (UniqueName: \"kubernetes.io/projected/1f9928f3-0c28-40df-b6ad-c871424ad3a6-kube-api-access-8tjzd\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.117646 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvqcj\" (UniqueName: \"kubernetes.io/projected/6515eec5-5595-42cb-8588-81baa0db47c1-kube-api-access-xvqcj\") pod \"placement-operator-controller-manager-9b6f8f78c-dprp4\" (UID: \"6515eec5-5595-42cb-8588-81baa0db47c1\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.126711 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.204942 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5pqb\" (UniqueName: \"kubernetes.io/projected/ad6a18d3-e1d2-446a-9b41-a9fca5e8b574-kube-api-access-c5pqb\") pod \"test-operator-controller-manager-6c866cfdcb-7lvxp\" (UID: \"ad6a18d3-e1d2-446a-9b41-a9fca5e8b574\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.208382 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.217711 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgll\" (UniqueName: \"kubernetes.io/projected/56476ba9-ae33-4d34-855c-0e144e4f5da3-kube-api-access-nwgll\") pod \"watcher-operator-controller-manager-9dbdf6486-xrghz\" (UID: \"56476ba9-ae33-4d34-855c-0e144e4f5da3\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.217816 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj2lk\" (UniqueName: \"kubernetes.io/projected/8f21d9f8-0bdd-43de-8196-186dccb7b2f8-kube-api-access-tj2lk\") pod \"telemetry-operator-controller-manager-7666dbdd4f-46t4f\" (UID: \"8f21d9f8-0bdd-43de-8196-186dccb7b2f8\") " pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.265158 4854 util.go:30] "No sandbox for pod can be found. 
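Note the durationBeforeRetry values: for any given volume the wait doubles on every consecutive failure, 500ms above, then 1s, 2s and 4s in the entries that follow. A standard-library sketch of that schedule; the cap is an illustrative assumption, not a constant read out of this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // first durationBeforeRetry seen above
        maxDelay := 2 * time.Minute     // assumed upper bound for the sketch
        for attempt := 1; attempt <= 6; attempt++ {
            // Mirrors "No retries permitted until <now+delay>" in the entries.
            fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
            delay *= 2 // each failure doubles the back-off
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }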
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.266095 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.274960 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.289485 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc"] Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.291828 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt5nk\" (UniqueName: \"kubernetes.io/projected/ddf8e54e-858e-432c-ab2d-8b4d83f6282b-kube-api-access-rt5nk\") pod \"swift-operator-controller-manager-bb586bbf4-qzzw2\" (UID: \"ddf8e54e-858e-432c-ab2d-8b4d83f6282b\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.303201 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.310041 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.314115 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.314386 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zngxx" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.314509 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.384668 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.403561 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc"] Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.419828 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.541109 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s"] Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.567435 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.568766 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.568853 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.568977 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66w89\" (UniqueName: \"kubernetes.io/projected/7f7c87f2-5743-4000-a36a-3a9400e24cdd-kube-api-access-66w89\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.569119 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:44.570173 4854 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:44.570335 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert podName:1f9928f3-0c28-40df-b6ad-c871424ad3a6 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:45.570318385 +0000 UTC m=+1163.896894957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" (UID: "1f9928f3-0c28-40df-b6ad-c871424ad3a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.571231 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.573181 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jwqbf" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.593870 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s"] Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.679641 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:44.679781 4854 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:44.679927 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 05:59:45.179908125 +0000 UTC m=+1163.506484697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "metrics-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.680287 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66w89\" (UniqueName: \"kubernetes.io/projected/7f7c87f2-5743-4000-a36a-3a9400e24cdd-kube-api-access-66w89\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.680325 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:44.680681 4854 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: E0103 05:59:44.680705 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 05:59:45.180696975 +0000 UTC m=+1163.507273547 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "webhook-server-cert" not found Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.709331 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66w89\" (UniqueName: \"kubernetes.io/projected/7f7c87f2-5743-4000-a36a-3a9400e24cdd-kube-api-access-66w89\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.782339 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4n5d\" (UniqueName: \"kubernetes.io/projected/b6397338-ed12-4f81-98aa-97a84e4256f6-kube-api-access-z4n5d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cfs9s\" (UID: \"b6397338-ed12-4f81-98aa-97a84e4256f6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.884101 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4n5d\" (UniqueName: \"kubernetes.io/projected/b6397338-ed12-4f81-98aa-97a84e4256f6-kube-api-access-z4n5d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cfs9s\" (UID: \"b6397338-ed12-4f81-98aa-97a84e4256f6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" Jan 03 05:59:44 crc kubenswrapper[4854]: I0103 05:59:44.911747 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4n5d\" (UniqueName: \"kubernetes.io/projected/b6397338-ed12-4f81-98aa-97a84e4256f6-kube-api-access-z4n5d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cfs9s\" (UID: \"b6397338-ed12-4f81-98aa-97a84e4256f6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.061164 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.170266 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:45 crc kubenswrapper[4854]: E0103 05:59:45.170594 4854 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 05:59:45 crc kubenswrapper[4854]: E0103 05:59:45.170648 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert podName:ba0f32da-a0e3-4c43-8dde-d6212a1c63e1 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:47.170632523 +0000 UTC m=+1165.497209095 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert") pod "infra-operator-controller-manager-6d99759cf-qqbq9" (UID: "ba0f32da-a0e3-4c43-8dde-d6212a1c63e1") : secret "infra-operator-webhook-server-cert" not found Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.271748 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.271880 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:45 crc kubenswrapper[4854]: E0103 05:59:45.272967 4854 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 05:59:45 crc kubenswrapper[4854]: E0103 05:59:45.273012 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 05:59:46.272996933 +0000 UTC m=+1164.599573505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "metrics-server-cert" not found Jan 03 05:59:45 crc kubenswrapper[4854]: E0103 05:59:45.273593 4854 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 05:59:45 crc kubenswrapper[4854]: E0103 05:59:45.273623 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 05:59:46.273614628 +0000 UTC m=+1164.600191200 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "webhook-server-cert" not found Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.294427 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v"] Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.355514 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2"] Jan 03 05:59:45 crc kubenswrapper[4854]: W0103 05:59:45.395802 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81de0b3b_e6fc_45c9_b347_995726d00213.slice/crio-5027498afd0ed640e238c3d97dc53949d10e833ada3bf5542ed0fe52b99af1c8 WatchSource:0}: Error finding container 5027498afd0ed640e238c3d97dc53949d10e833ada3bf5542ed0fe52b99af1c8: Status 404 returned error can't find the container with id 5027498afd0ed640e238c3d97dc53949d10e833ada3bf5542ed0fe52b99af1c8 Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.579764 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:45 crc kubenswrapper[4854]: E0103 05:59:45.580698 4854 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 05:59:45 crc kubenswrapper[4854]: E0103 05:59:45.580830 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert podName:1f9928f3-0c28-40df-b6ad-c871424ad3a6 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:47.580800589 +0000 UTC m=+1165.907377161 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" (UID: "1f9928f3-0c28-40df-b6ad-c871424ad3a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.941643 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" event={"ID":"40ad961e-d740-49fa-9a1f-e9d950002a3e","Type":"ContainerStarted","Data":"e34b95f449c02ab1dc4f0237f1b0e42957606d17669925f4acc064ebf06aeb1e"} Jan 03 05:59:45 crc kubenswrapper[4854]: I0103 05:59:45.954187 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" event={"ID":"81de0b3b-e6fc-45c9-b347-995726d00213","Type":"ContainerStarted","Data":"5027498afd0ed640e238c3d97dc53949d10e833ada3bf5542ed0fe52b99af1c8"} Jan 03 05:59:46 crc kubenswrapper[4854]: W0103 05:59:46.198265 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda327e8cf_824f_41b1_9076_5fd57a8b4352.slice/crio-8e5b0ae95fd16a32a6db3c028ef3542496d35d0eef60ed62d76610ec15b51637 WatchSource:0}: Error finding container 8e5b0ae95fd16a32a6db3c028ef3542496d35d0eef60ed62d76610ec15b51637: Status 404 returned error can't find the container with id 8e5b0ae95fd16a32a6db3c028ef3542496d35d0eef60ed62d76610ec15b51637 Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.210659 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb"] Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.233930 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-x78fv"] Jan 03 05:59:46 crc kubenswrapper[4854]: W0103 05:59:46.239064 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5b690cb_eb48_469c_a774_eff5eda46f89.slice/crio-8e5976412bc9811a043af42c30798f45648134be2165d8a015dbc7be9ae5c81a WatchSource:0}: Error finding container 8e5976412bc9811a043af42c30798f45648134be2165d8a015dbc7be9ae5c81a: Status 404 returned error can't find the container with id 8e5976412bc9811a043af42c30798f45648134be2165d8a015dbc7be9ae5c81a Jan 03 05:59:46 crc kubenswrapper[4854]: W0103 05:59:46.240534 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d4776d0_290f_4c82_aa5c_6412b5bb4608.slice/crio-32e6e79255fea26b76a7762ff8edaab7a7a888434696bf2bb9ad8cfb41d8e4c8 WatchSource:0}: Error finding container 32e6e79255fea26b76a7762ff8edaab7a7a888434696bf2bb9ad8cfb41d8e4c8: Status 404 returned error can't find the container with id 32e6e79255fea26b76a7762ff8edaab7a7a888434696bf2bb9ad8cfb41d8e4c8 Jan 03 05:59:46 crc kubenswrapper[4854]: W0103 05:59:46.245170 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2f6c336_91f0_41e6_b439_c5d940264b7f.slice/crio-76dd78205db74e766540f2225845fd7c1efb58a1bedb580550cf4a403ce4f489 WatchSource:0}: Error finding container 76dd78205db74e766540f2225845fd7c1efb58a1bedb580550cf4a403ce4f489: Status 404 returned error can't find the container with id 
76dd78205db74e766540f2225845fd7c1efb58a1bedb580550cf4a403ce4f489 Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.249263 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6"] Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.258631 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf"] Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.264713 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr"] Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.303088 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.303563 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:46 crc kubenswrapper[4854]: E0103 05:59:46.303697 4854 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 05:59:46 crc kubenswrapper[4854]: E0103 05:59:46.303778 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 05:59:48.303757757 +0000 UTC m=+1166.630334329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "webhook-server-cert" not found Jan 03 05:59:46 crc kubenswrapper[4854]: E0103 05:59:46.303859 4854 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 05:59:46 crc kubenswrapper[4854]: E0103 05:59:46.303954 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 05:59:48.303936821 +0000 UTC m=+1166.630513393 (durationBeforeRetry 2s). 
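The "SyncLoop (PLEG)" entries come from the Pod Lifecycle Event Generator: the kubelet periodically relists containers at the runtime, diffs the result against its cached view, and injects events such as ContainerStarted into the sync loop. A toy relist-and-diff; the types and names are invented for the sketch, the real PLEG lives in kubelet's pkg/kubelet/pleg:

    package main

    import "fmt"

    type state string

    const running state = "running"

    // relist compares the previous and current container-state maps and emits
    // an event for anything newly running.
    func relist(prev, curr map[string]state) []string {
        var events []string
        for id, s := range curr {
            if s == running && prev[id] != running {
                events = append(events, "ContainerStarted "+id)
            }
        }
        return events
    }

    func main() {
        prev := map[string]state{}
        curr := map[string]state{
            "091b7e2f33a26739d9006825010d472bda98cff2fe58a8f2b67ca3c7eeed190e": running,
        }
        for _, e := range relist(prev, curr) {
            fmt.Println("SyncLoop (PLEG): event for pod:", e)
        }
    }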
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "metrics-server-cert" not found Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.474349 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc"] Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.481244 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9"] Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.495600 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh"] Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.523202 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn"] Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.527584 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb"] Jan 03 05:59:46 crc kubenswrapper[4854]: W0103 05:59:46.543230 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d8399ce_3c90_4601_9a32_31dc20da4552.slice/crio-a9d5554aabca5dd4bf872e5c5f7c554b1417f32b72c4bca1dba4d91352c89192 WatchSource:0}: Error finding container a9d5554aabca5dd4bf872e5c5f7c554b1417f32b72c4bca1dba4d91352c89192: Status 404 returned error can't find the container with id a9d5554aabca5dd4bf872e5c5f7c554b1417f32b72c4bca1dba4d91352c89192 Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.967951 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" event={"ID":"14991c3c-8c35-4008-b1a0-1b8690074322","Type":"ContainerStarted","Data":"091b7e2f33a26739d9006825010d472bda98cff2fe58a8f2b67ca3c7eeed190e"} Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.969498 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" event={"ID":"402a077e-f741-447d-ab1c-25bc62cd24cf","Type":"ContainerStarted","Data":"d89cec6b0dd0d4de08695d4cc8c4d39dae5efbcfc061602db79db19163c8bac0"} Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.973492 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" event={"ID":"04d8c7f1-6674-45b0-9506-9d62c1a2f892","Type":"ContainerStarted","Data":"1a46b406a30e6e4e1f99949285ca9bcb80df055682c2bba4420ac9fbeeb57de6"} Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.974997 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" event={"ID":"1d8399ce-3c90-4601-9a32-31dc20da4552","Type":"ContainerStarted","Data":"a9d5554aabca5dd4bf872e5c5f7c554b1417f32b72c4bca1dba4d91352c89192"} Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.980287 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" 
event={"ID":"a327e8cf-824f-41b1-9076-5fd57a8b4352","Type":"ContainerStarted","Data":"8e5b0ae95fd16a32a6db3c028ef3542496d35d0eef60ed62d76610ec15b51637"} Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.992278 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" event={"ID":"f5b690cb-eb48-469c-a774-eff5eda46f89","Type":"ContainerStarted","Data":"8e5976412bc9811a043af42c30798f45648134be2165d8a015dbc7be9ae5c81a"} Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.994498 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" event={"ID":"05f5522f-8e47-4d35-be75-2edee0f16f77","Type":"ContainerStarted","Data":"e1f2e36a9b4451b4fb10d29b75ed444fe6723a5737b688ee84505230343079a3"} Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.996503 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" event={"ID":"25988b2b-1924-4007-a6b1-5e5403d5dc68","Type":"ContainerStarted","Data":"e75d8c13aefba74b97581489435c4cd267d5eff8d345cca961fdbcc54d6cd97d"} Jan 03 05:59:46 crc kubenswrapper[4854]: I0103 05:59:46.998941 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" event={"ID":"7d4776d0-290f-4c82-aa5c-6412b5bb4608","Type":"ContainerStarted","Data":"32e6e79255fea26b76a7762ff8edaab7a7a888434696bf2bb9ad8cfb41d8e4c8"} Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.001194 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" event={"ID":"c2f6c336-91f0-41e6-b439-c5d940264b7f","Type":"ContainerStarted","Data":"76dd78205db74e766540f2225845fd7c1efb58a1bedb580550cf4a403ce4f489"} Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.080236 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s"] Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.087827 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2"] Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.132788 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4"] Jan 03 05:59:47 crc kubenswrapper[4854]: W0103 05:59:47.135528 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6397338_ed12_4f81_98aa_97a84e4256f6.slice/crio-dc98d3d1b12c088512ae14a0b7dde0c98482bb716d94016d40e1056632be4b03 WatchSource:0}: Error finding container dc98d3d1b12c088512ae14a0b7dde0c98482bb716d94016d40e1056632be4b03: Status 404 returned error can't find the container with id dc98d3d1b12c088512ae14a0b7dde0c98482bb716d94016d40e1056632be4b03 Jan 03 05:59:47 crc kubenswrapper[4854]: W0103 05:59:47.139406 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddf8e54e_858e_432c_ab2d_8b4d83f6282b.slice/crio-32cff2908590f3b4250c8f62d8353238aebfa38ed373a9b91f52ff0f8e13178b WatchSource:0}: Error finding container 32cff2908590f3b4250c8f62d8353238aebfa38ed373a9b91f52ff0f8e13178b: Status 404 returned error can't find the container with id 
Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.140514 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f"]
Jan 03 05:59:47 crc kubenswrapper[4854]: W0103 05:59:47.142375 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f21d9f8_0bdd_43de_8196_186dccb7b2f8.slice/crio-4a8618fd56812c111fb2ea946074a0570676216fde6b9f93a65fc1e368b3cd09 WatchSource:0}: Error finding container 4a8618fd56812c111fb2ea946074a0570676216fde6b9f93a65fc1e368b3cd09: Status 404 returned error can't find the container with id 4a8618fd56812c111fb2ea946074a0570676216fde6b9f93a65fc1e368b3cd09
Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.147021 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx"]
Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.163835 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6m8hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-z7cfx_openstack-operators(fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.166641 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1"
Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.169905 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54"]
Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.176411 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz"]
Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.180122 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nzxkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-jqj54_openstack-operators(e62c43c5-cac2-4f9f-9e1b-de61827c4c94): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.181328 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94"
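"ErrImagePull: pull QPS exceeded" is not a registry-side failure: it is the kubelet's own image-pull rate limiter rejecting the pull because a burst of operator pods all requested images at once. The limiter is a token bucket configured by registryPullQPS and registryBurst in the KubeletConfiguration (documented defaults 5 and 10). A sketch of the same behavior using golang.org/x/time/rate:

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // Token bucket mirroring the documented kubelet defaults:
        // refill 5 tokens/second, burst capacity 10.
        limiter := rate.NewLimiter(rate.Limit(5), 10)
        for pull := 1; pull <= 15; pull++ {
            if limiter.Allow() {
                fmt.Printf("pull %d: admitted\n", pull)
            } else {
                // Surfaced in the entries above as ErrImagePull: "pull QPS exceeded".
                fmt.Printf("pull %d: pull QPS exceeded\n", pull)
            }
        }
    }

Raising registryPullQPS/registryBurst would avoid this, but simply letting the back-off retries drain the queue, as happens later in this log, also resolves it.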
podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.181555 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-xrghz_openstack-operators(56476ba9-ae33-4d34-855c-0e144e4f5da3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.183918 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.189893 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp"] Jan 03 05:59:47 crc kubenswrapper[4854]: W0103 05:59:47.199635 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad6a18d3_e1d2_446a_9b41_a9fca5e8b574.slice/crio-2e40975240d8bd2ed7dad243f58a05b341e69befa8427906ff4e0f1692375eca WatchSource:0}: Error finding container 2e40975240d8bd2ed7dad243f58a05b341e69befa8427906ff4e0f1692375eca: Status 404 
returned error can't find the container with id 2e40975240d8bd2ed7dad243f58a05b341e69befa8427906ff4e0f1692375eca Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.202984 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5pqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-7lvxp_openstack-operators(ad6a18d3-e1d2-446a-9b41-a9fca5e8b574): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.204314 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.265373 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.265703 4854 secret.go:188] Couldn't get secret 
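Each dumped Container spec wires LivenessProbe to GET /healthz and ReadinessProbe to GET /readyz on port 8081, matching --health-probe-bind-address=:8081 in Args. Once the image pull eventually succeeds, the kubelet probes endpoints like this minimal stand-in (a sketch only; the operators' real binaries wire these endpoints through controller-runtime):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        ok := func(w http.ResponseWriter, _ *http.Request) { fmt.Fprintln(w, "ok") }
        mux := http.NewServeMux()
        mux.HandleFunc("/healthz", ok) // LivenessProbe: InitialDelaySeconds:15, PeriodSeconds:20
        mux.HandleFunc("/readyz", ok)  // ReadinessProbe: InitialDelaySeconds:5, PeriodSeconds:10
        // Matches --health-probe-bind-address=:8081 in the dumped spec's Args.
        if err := http.ListenAndServe(":8081", mux); err != nil {
            panic(err)
        }
    }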
Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.265858 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert podName:ba0f32da-a0e3-4c43-8dde-d6212a1c63e1 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:51.265819106 +0000 UTC m=+1169.592395868 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert") pod "infra-operator-controller-manager-6d99759cf-qqbq9" (UID: "ba0f32da-a0e3-4c43-8dde-d6212a1c63e1") : secret "infra-operator-webhook-server-cert" not found
Jan 03 05:59:47 crc kubenswrapper[4854]: I0103 05:59:47.674873 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5"
Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.675122 4854 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 03 05:59:47 crc kubenswrapper[4854]: E0103 05:59:47.675234 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert podName:1f9928f3-0c28-40df-b6ad-c871424ad3a6 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:51.675208613 +0000 UTC m=+1170.001785265 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" (UID: "1f9928f3-0c28-40df-b6ad-c871424ad3a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.033489 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" event={"ID":"8f21d9f8-0bdd-43de-8196-186dccb7b2f8","Type":"ContainerStarted","Data":"4a8618fd56812c111fb2ea946074a0570676216fde6b9f93a65fc1e368b3cd09"}
Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.041118 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" event={"ID":"6515eec5-5595-42cb-8588-81baa0db47c1","Type":"ContainerStarted","Data":"6c96b2ad7617c3696b621980547f43206b64591e4d04fa9fb9212b449191b4a0"}
Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.043690 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" event={"ID":"e62c43c5-cac2-4f9f-9e1b-de61827c4c94","Type":"ContainerStarted","Data":"278ac0523d70a476b0d56305a2ba3107312778a6edbd4db0732021b9fdcd536d"}
Jan 03 05:59:48 crc kubenswrapper[4854]: E0103 05:59:48.049754 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94"
Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.050104 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" event={"ID":"b6397338-ed12-4f81-98aa-97a84e4256f6","Type":"ContainerStarted","Data":"dc98d3d1b12c088512ae14a0b7dde0c98482bb716d94016d40e1056632be4b03"}
Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.052964 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" event={"ID":"fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1","Type":"ContainerStarted","Data":"a52757062e5ca8d7b438971893979636f1f3c1d668d21f32247d910d4cb52d68"}
Jan 03 05:59:48 crc kubenswrapper[4854]: E0103 05:59:48.055186 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1"
Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.064455 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" event={"ID":"ddf8e54e-858e-432c-ab2d-8b4d83f6282b","Type":"ContainerStarted","Data":"32cff2908590f3b4250c8f62d8353238aebfa38ed373a9b91f52ff0f8e13178b"}
Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.077906 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" event={"ID":"ad6a18d3-e1d2-446a-9b41-a9fca5e8b574","Type":"ContainerStarted","Data":"2e40975240d8bd2ed7dad243f58a05b341e69befa8427906ff4e0f1692375eca"}
event={"ID":"ad6a18d3-e1d2-446a-9b41-a9fca5e8b574","Type":"ContainerStarted","Data":"2e40975240d8bd2ed7dad243f58a05b341e69befa8427906ff4e0f1692375eca"} Jan 03 05:59:48 crc kubenswrapper[4854]: E0103 05:59:48.079989 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.081062 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" event={"ID":"56476ba9-ae33-4d34-855c-0e144e4f5da3","Type":"ContainerStarted","Data":"a99c3fc3b705c632972f7618cd4ffd021e957e168a624f148c85a1f5fb7a4b8e"} Jan 03 05:59:48 crc kubenswrapper[4854]: E0103 05:59:48.108318 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.401590 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:48 crc kubenswrapper[4854]: E0103 05:59:48.401898 4854 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 05:59:48 crc kubenswrapper[4854]: I0103 05:59:48.402343 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:48 crc kubenswrapper[4854]: E0103 05:59:48.402377 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 05:59:52.402347084 +0000 UTC m=+1170.728923656 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "webhook-server-cert" not found Jan 03 05:59:48 crc kubenswrapper[4854]: E0103 05:59:48.402610 4854 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 05:59:48 crc kubenswrapper[4854]: E0103 05:59:48.402735 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 05:59:52.402702303 +0000 UTC m=+1170.729279045 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "metrics-server-cert" not found Jan 03 05:59:49 crc kubenswrapper[4854]: E0103 05:59:49.150755 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" Jan 03 05:59:49 crc kubenswrapper[4854]: E0103 05:59:49.152217 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" Jan 03 05:59:49 crc kubenswrapper[4854]: E0103 05:59:49.152583 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" Jan 03 05:59:49 crc kubenswrapper[4854]: E0103 05:59:49.153348 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" Jan 03 05:59:51 crc kubenswrapper[4854]: I0103 05:59:51.341794 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:51 crc kubenswrapper[4854]: E0103 05:59:51.342000 4854 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 05:59:51 crc kubenswrapper[4854]: E0103 05:59:51.342542 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert podName:ba0f32da-a0e3-4c43-8dde-d6212a1c63e1 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:59.342511006 +0000 UTC m=+1177.669087678 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert") pod "infra-operator-controller-manager-6d99759cf-qqbq9" (UID: "ba0f32da-a0e3-4c43-8dde-d6212a1c63e1") : secret "infra-operator-webhook-server-cert" not found Jan 03 05:59:51 crc kubenswrapper[4854]: I0103 05:59:51.754888 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:51 crc kubenswrapper[4854]: E0103 05:59:51.755205 4854 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 05:59:51 crc kubenswrapper[4854]: E0103 05:59:51.755260 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert podName:1f9928f3-0c28-40df-b6ad-c871424ad3a6 nodeName:}" failed. No retries permitted until 2026-01-03 05:59:59.755242905 +0000 UTC m=+1178.081819477 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" (UID: "1f9928f3-0c28-40df-b6ad-c871424ad3a6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 05:59:52 crc kubenswrapper[4854]: I0103 05:59:52.468799 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:52 crc kubenswrapper[4854]: I0103 05:59:52.468992 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 05:59:52 crc kubenswrapper[4854]: E0103 05:59:52.469044 4854 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 05:59:52 crc kubenswrapper[4854]: E0103 05:59:52.469181 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. 
No retries permitted until 2026-01-03 06:00:00.469155766 +0000 UTC m=+1178.795732548 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "metrics-server-cert" not found Jan 03 05:59:52 crc kubenswrapper[4854]: E0103 05:59:52.469211 4854 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 05:59:52 crc kubenswrapper[4854]: E0103 05:59:52.469290 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs podName:7f7c87f2-5743-4000-a36a-3a9400e24cdd nodeName:}" failed. No retries permitted until 2026-01-03 06:00:00.469268229 +0000 UTC m=+1178.795844791 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs") pod "openstack-operator-controller-manager-85b679bdc6-qrnbc" (UID: "7f7c87f2-5743-4000-a36a-3a9400e24cdd") : secret "webhook-server-cert" not found Jan 03 05:59:59 crc kubenswrapper[4854]: I0103 05:59:59.387117 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:59 crc kubenswrapper[4854]: I0103 05:59:59.394604 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba0f32da-a0e3-4c43-8dde-d6212a1c63e1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qqbq9\" (UID: \"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:59 crc kubenswrapper[4854]: I0103 05:59:59.497582 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 05:59:59 crc kubenswrapper[4854]: I0103 05:59:59.794252 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 05:59:59 crc kubenswrapper[4854]: I0103 05:59:59.800428 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f9928f3-0c28-40df-b6ad-c871424ad3a6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7nclw5\" (UID: \"1f9928f3-0c28-40df-b6ad-c871424ad3a6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.045019 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.213307 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c"] Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.215774 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.234300 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c"] Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.239506 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.239757 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.314835 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljnz\" (UniqueName: \"kubernetes.io/projected/bef59ea8-bada-439a-a6fe-1745e38b01c7-kube-api-access-jljnz\") pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.315149 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bef59ea8-bada-439a-a6fe-1745e38b01c7-config-volume\") pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.315284 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bef59ea8-bada-439a-a6fe-1745e38b01c7-secret-volume\") pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.417699 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bef59ea8-bada-439a-a6fe-1745e38b01c7-secret-volume\") pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.417907 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jljnz\" (UniqueName: \"kubernetes.io/projected/bef59ea8-bada-439a-a6fe-1745e38b01c7-kube-api-access-jljnz\") pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.417940 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bef59ea8-bada-439a-a6fe-1745e38b01c7-config-volume\") 
pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.419809 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bef59ea8-bada-439a-a6fe-1745e38b01c7-config-volume\") pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.422815 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bef59ea8-bada-439a-a6fe-1745e38b01c7-secret-volume\") pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.443827 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jljnz\" (UniqueName: \"kubernetes.io/projected/bef59ea8-bada-439a-a6fe-1745e38b01c7-kube-api-access-jljnz\") pod \"collect-profiles-29457000-4b58c\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.520041 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.520589 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.523365 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-webhook-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.524723 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f7c87f2-5743-4000-a36a-3a9400e24cdd-metrics-certs\") pod \"openstack-operator-controller-manager-85b679bdc6-qrnbc\" (UID: \"7f7c87f2-5743-4000-a36a-3a9400e24cdd\") " pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.580755 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:00 crc kubenswrapper[4854]: I0103 06:00:00.604332 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 06:00:08 crc kubenswrapper[4854]: E0103 06:00:08.174409 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Jan 03 06:00:08 crc kubenswrapper[4854]: E0103 06:00:08.175527 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pr4k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-8xksh_openstack-operators(05f5522f-8e47-4d35-be75-2edee0f16f77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:08 crc kubenswrapper[4854]: E0103 06:00:08.176783 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" Jan 03 06:00:09 crc kubenswrapper[4854]: E0103 06:00:09.018330 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" Jan 03 06:00:10 crc kubenswrapper[4854]: E0103 06:00:10.895774 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7" Jan 03 06:00:10 crc kubenswrapper[4854]: E0103 06:00:10.896584 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tz65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-78979fc445-jx5q2_openstack-operators(40ad961e-d740-49fa-9a1f-e9d950002a3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:10 crc kubenswrapper[4854]: E0103 06:00:10.897770 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podUID="40ad961e-d740-49fa-9a1f-e9d950002a3e" Jan 03 06:00:11 crc kubenswrapper[4854]: E0103 06:00:11.467355 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podUID="40ad961e-d740-49fa-9a1f-e9d950002a3e" Jan 03 06:00:11 crc kubenswrapper[4854]: E0103 06:00:11.873425 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41" Jan 03 06:00:11 crc kubenswrapper[4854]: E0103 06:00:11.873681 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hktn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b88bfc995-vdnq9_openstack-operators(25988b2b-1924-4007-a6b1-5e5403d5dc68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:11 crc kubenswrapper[4854]: 
E0103 06:00:11.874964 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podUID="25988b2b-1924-4007-a6b1-5e5403d5dc68" Jan 03 06:00:12 crc kubenswrapper[4854]: E0103 06:00:12.474792 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podUID="25988b2b-1924-4007-a6b1-5e5403d5dc68" Jan 03 06:00:17 crc kubenswrapper[4854]: E0103 06:00:17.914247 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:343fa80cc4dcd40f7861f392faf5d441f3ce7670" Jan 03 06:00:17 crc kubenswrapper[4854]: E0103 06:00:17.914751 4854 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:343fa80cc4dcd40f7861f392faf5d441f3ce7670" Jan 03 06:00:17 crc kubenswrapper[4854]: E0103 06:00:17.914926 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:343fa80cc4dcd40f7861f392faf5d441f3ce7670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tj2lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7666dbdd4f-46t4f_openstack-operators(8f21d9f8-0bdd-43de-8196-186dccb7b2f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:17 crc kubenswrapper[4854]: E0103 06:00:17.916389 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podUID="8f21d9f8-0bdd-43de-8196-186dccb7b2f8" Jan 03 06:00:18 crc kubenswrapper[4854]: E0103 06:00:18.547457 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:343fa80cc4dcd40f7861f392faf5d441f3ce7670\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podUID="8f21d9f8-0bdd-43de-8196-186dccb7b2f8" Jan 03 06:00:18 crc kubenswrapper[4854]: E0103 06:00:18.812753 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04" Jan 03 06:00:18 crc kubenswrapper[4854]: E0103 06:00:18.812957 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lxd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-658dd65b86-k6nnf_openstack-operators(7d4776d0-290f-4c82-aa5c-6412b5bb4608): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:18 crc kubenswrapper[4854]: E0103 06:00:18.814167 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" Jan 03 06:00:19 crc kubenswrapper[4854]: E0103 06:00:19.558436 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04\\\"\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" Jan 03 06:00:23 crc kubenswrapper[4854]: E0103 06:00:23.333245 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Jan 03 06:00:23 crc kubenswrapper[4854]: E0103 06:00:23.334010 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c725l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7cd87b778f-xgtzc_openstack-operators(04d8c7f1-6674-45b0-9506-9d62c1a2f892): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:23 crc kubenswrapper[4854]: E0103 06:00:23.335243 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" podUID="04d8c7f1-6674-45b0-9506-9d62c1a2f892" Jan 03 06:00:23 crc kubenswrapper[4854]: E0103 06:00:23.592850 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" podUID="04d8c7f1-6674-45b0-9506-9d62c1a2f892" Jan 03 06:00:24 crc kubenswrapper[4854]: E0103 06:00:24.052628 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420" Jan 03 06:00:24 crc kubenswrapper[4854]: E0103 06:00:24.052829 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xvqcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-dprp4_openstack-operators(6515eec5-5595-42cb-8588-81baa0db47c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:24 crc kubenswrapper[4854]: E0103 06:00:24.054053 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" Jan 03 06:00:24 crc kubenswrapper[4854]: E0103 06:00:24.601728 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" Jan 03 06:00:28 crc kubenswrapper[4854]: E0103 06:00:28.099542 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3" Jan 03 06:00:28 crc kubenswrapper[4854]: E0103 06:00:28.099982 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92xv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-7f5ddd8d7b-trsxr_openstack-operators(f5b690cb-eb48-469c-a774-eff5eda46f89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:28 crc kubenswrapper[4854]: E0103 06:00:28.101233 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" Jan 03 06:00:28 crc kubenswrapper[4854]: E0103 06:00:28.640841 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" Jan 03 06:00:28 crc kubenswrapper[4854]: E0103 06:00:28.914276 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848" Jan 03 06:00:28 crc kubenswrapper[4854]: E0103 06:00:28.914463 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82z6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-f99f54bc8-4pbfn_openstack-operators(1d8399ce-3c90-4601-9a32-31dc20da4552): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:28 crc kubenswrapper[4854]: E0103 06:00:28.915742 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" Jan 03 06:00:29 crc kubenswrapper[4854]: E0103 06:00:29.648470 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" 
podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" Jan 03 06:00:30 crc kubenswrapper[4854]: E0103 06:00:30.675613 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989" Jan 03 06:00:30 crc kubenswrapper[4854]: E0103 06:00:30.675842 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cbtlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-f6f74d6db-jvp7v_openstack-operators(81de0b3b-e6fc-45c9-b347-995726d00213): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:30 crc kubenswrapper[4854]: E0103 06:00:30.677125 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podUID="81de0b3b-e6fc-45c9-b347-995726d00213" Jan 03 06:00:31 crc kubenswrapper[4854]: E0103 06:00:31.508168 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d" Jan 03 06:00:31 crc kubenswrapper[4854]: E0103 06:00:31.508840 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbklk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-7b549fc966-hgwsb_openstack-operators(a327e8cf-824f-41b1-9076-5fd57a8b4352): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:31 crc kubenswrapper[4854]: E0103 06:00:31.510149 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" Jan 03 06:00:31 crc kubenswrapper[4854]: E0103 06:00:31.664760 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d\\\"\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" 
podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" Jan 03 06:00:31 crc kubenswrapper[4854]: E0103 06:00:31.665137 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podUID="81de0b3b-e6fc-45c9-b347-995726d00213" Jan 03 06:00:37 crc kubenswrapper[4854]: E0103 06:00:37.157016 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Jan 03 06:00:37 crc kubenswrapper[4854]: E0103 06:00:37.158327 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t2zd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66f8b87655-msvf6_openstack-operators(c2f6c336-91f0-41e6-b439-c5d940264b7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:37 crc kubenswrapper[4854]: E0103 06:00:37.159607 4854 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podUID="c2f6c336-91f0-41e6-b439-c5d940264b7f" Jan 03 06:00:37 crc kubenswrapper[4854]: E0103 06:00:37.734000 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podUID="c2f6c336-91f0-41e6-b439-c5d940264b7f" Jan 03 06:00:37 crc kubenswrapper[4854]: E0103 06:00:37.795778 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c" Jan 03 06:00:37 crc kubenswrapper[4854]: E0103 06:00:37.796165 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkf5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-568985c78-x78fv_openstack-operators(14991c3c-8c35-4008-b1a0-1b8690074322): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:37 crc kubenswrapper[4854]: E0103 06:00:37.797668 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podUID="14991c3c-8c35-4008-b1a0-1b8690074322" Jan 03 06:00:38 crc kubenswrapper[4854]: E0103 06:00:38.436107 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 03 06:00:38 crc kubenswrapper[4854]: E0103 06:00:38.436846 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z4n5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-cfs9s_openstack-operators(b6397338-ed12-4f81-98aa-97a84e4256f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:38 crc kubenswrapper[4854]: E0103 06:00:38.438198 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" podUID="b6397338-ed12-4f81-98aa-97a84e4256f6" Jan 03 
06:00:38 crc kubenswrapper[4854]: E0103 06:00:38.742017 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podUID="14991c3c-8c35-4008-b1a0-1b8690074322" Jan 03 06:00:38 crc kubenswrapper[4854]: E0103 06:00:38.742394 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" podUID="b6397338-ed12-4f81-98aa-97a84e4256f6" Jan 03 06:00:39 crc kubenswrapper[4854]: E0103 06:00:39.304654 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6" Jan 03 06:00:39 crc kubenswrapper[4854]: E0103 06:00:39.304888 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c5pqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-7lvxp_openstack-operators(ad6a18d3-e1d2-446a-9b41-a9fca5e8b574): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:39 crc kubenswrapper[4854]: E0103 06:00:39.306148 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" Jan 03 06:00:39 crc kubenswrapper[4854]: E0103 06:00:39.995236 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Jan 03 06:00:39 crc kubenswrapper[4854]: E0103 06:00:39.995476 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nzxkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-jqj54_openstack-operators(e62c43c5-cac2-4f9f-9e1b-de61827c4c94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:39 crc kubenswrapper[4854]: E0103 06:00:39.996656 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" Jan 03 06:00:40 crc kubenswrapper[4854]: E0103 06:00:40.562020 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c" Jan 03 06:00:40 crc kubenswrapper[4854]: E0103 06:00:40.562208 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6m8hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-z7cfx_openstack-operators(fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:40 crc kubenswrapper[4854]: E0103 06:00:40.563391 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" Jan 03 06:00:41 crc kubenswrapper[4854]: E0103 06:00:41.211434 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 03 06:00:41 crc kubenswrapper[4854]: E0103 06:00:41.211666 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjtz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-ncjlb_openstack-operators(402a077e-f741-447d-ab1c-25bc62cd24cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:00:41 crc kubenswrapper[4854]: E0103 06:00:41.213522 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.731450 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9"] Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.766217 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.766308 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.803747 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" event={"ID":"05f5522f-8e47-4d35-be75-2edee0f16f77","Type":"ContainerStarted","Data":"552f47762794e0c39bb4081c8db206bd34205a674c1b762980d168b1617b9e91"} Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.804034 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.809001 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" event={"ID":"25988b2b-1924-4007-a6b1-5e5403d5dc68","Type":"ContainerStarted","Data":"5a78d09ec546ee13bf3ea690431552445ce59db383b12a2c96a94ff28061ccd8"} Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.809263 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.815569 4854 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" event={"ID":"ddf8e54e-858e-432c-ab2d-8b4d83f6282b","Type":"ContainerStarted","Data":"9ad77647bc8c7a303f8c28d2bec0f7ab94d0e6d882ca027fcc923850bed2e1e6"} Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.815780 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.836913 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podStartSLOduration=4.305020182 podStartE2EDuration="58.836889032s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.701298636 +0000 UTC m=+1165.027875198" lastFinishedPulling="2026-01-03 06:00:41.233167466 +0000 UTC m=+1219.559744048" observedRunningTime="2026-01-03 06:00:41.820147934 +0000 UTC m=+1220.146724526" watchObservedRunningTime="2026-01-03 06:00:41.836889032 +0000 UTC m=+1220.163465604" Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.846656 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5"] Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.868631 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc"] Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.877032 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" podStartSLOduration=6.74698835 podStartE2EDuration="58.877014705s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:47.155808675 +0000 UTC m=+1165.482385247" lastFinishedPulling="2026-01-03 06:00:39.28583504 +0000 UTC m=+1217.612411602" observedRunningTime="2026-01-03 06:00:41.87561352 +0000 UTC m=+1220.202190092" watchObservedRunningTime="2026-01-03 06:00:41.877014705 +0000 UTC m=+1220.203591287" Jan 03 06:00:41 crc kubenswrapper[4854]: E0103 06:00:41.911219 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" Jan 03 06:00:41 crc kubenswrapper[4854]: I0103 06:00:41.913110 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podStartSLOduration=4.246987757 podStartE2EDuration="58.913062006s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.650259075 +0000 UTC m=+1164.976835647" lastFinishedPulling="2026-01-03 06:00:41.316333324 +0000 UTC m=+1219.642909896" observedRunningTime="2026-01-03 06:00:41.899435295 +0000 UTC m=+1220.226011867" watchObservedRunningTime="2026-01-03 06:00:41.913062006 +0000 UTC m=+1220.239638588" Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.020825 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c"] Jan 03 06:00:42 crc 
kubenswrapper[4854]: W0103 06:00:42.367635 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbef59ea8_bada_439a_a6fe_1745e38b01c7.slice/crio-acac295213f5ba77b4b2fefe3e65b9badc59c026cd78f70546e91eb48c099309 WatchSource:0}: Error finding container acac295213f5ba77b4b2fefe3e65b9badc59c026cd78f70546e91eb48c099309: Status 404 returned error can't find the container with id acac295213f5ba77b4b2fefe3e65b9badc59c026cd78f70546e91eb48c099309 Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.854328 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" event={"ID":"04d8c7f1-6674-45b0-9506-9d62c1a2f892","Type":"ContainerStarted","Data":"3f4692302ff77bfafcff5ca37f1422fddb733d20b795c9b3ca159f49df47472f"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.857303 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" event={"ID":"6515eec5-5595-42cb-8588-81baa0db47c1","Type":"ContainerStarted","Data":"624bad035cc975d2993cfecbbce65f6e0bdf8f1a0acb430a45c97693d78d33ab"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.859439 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" event={"ID":"7d4776d0-290f-4c82-aa5c-6412b5bb4608","Type":"ContainerStarted","Data":"7a94f73858bd6bc637fdf88e96a68dc87a1aefec805ad7273825018854334617"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.860486 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.861989 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" event={"ID":"40ad961e-d740-49fa-9a1f-e9d950002a3e","Type":"ContainerStarted","Data":"f98ccd1abe9f99027047f10f3722103227a265ab76bb83c1979302bbb5643ac3"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.862489 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.876598 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" event={"ID":"56476ba9-ae33-4d34-855c-0e144e4f5da3","Type":"ContainerStarted","Data":"7797cd0dbfc31494234b66b2ec1186c0b6f0cb586b6282d6a6e7c1bac6d18947"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.877327 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.878751 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" event={"ID":"1f9928f3-0c28-40df-b6ad-c871424ad3a6","Type":"ContainerStarted","Data":"d5771dc6f25977f17d86a10e35b7c60dc3014bbe6486864b8fb37ecd08db9809"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.899920 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podStartSLOduration=4.664509496 podStartE2EDuration="59.899906538s" 
podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.245645168 +0000 UTC m=+1164.572221760" lastFinishedPulling="2026-01-03 06:00:41.48104223 +0000 UTC m=+1219.807618802" observedRunningTime="2026-01-03 06:00:42.898488873 +0000 UTC m=+1221.225065435" watchObservedRunningTime="2026-01-03 06:00:42.899906538 +0000 UTC m=+1221.226483110" Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.913304 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" event={"ID":"bef59ea8-bada-439a-a6fe-1745e38b01c7","Type":"ContainerStarted","Data":"acac295213f5ba77b4b2fefe3e65b9badc59c026cd78f70546e91eb48c099309"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.938307 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podStartSLOduration=4.041419582 podStartE2EDuration="59.938284487s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:45.41949943 +0000 UTC m=+1163.746076002" lastFinishedPulling="2026-01-03 06:00:41.316364335 +0000 UTC m=+1219.642940907" observedRunningTime="2026-01-03 06:00:42.921524758 +0000 UTC m=+1221.248101330" watchObservedRunningTime="2026-01-03 06:00:42.938284487 +0000 UTC m=+1221.264861059" Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.940009 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" event={"ID":"7f7c87f2-5743-4000-a36a-3a9400e24cdd","Type":"ContainerStarted","Data":"a39b931450efb5b89658c55982ba169bda31287c4fee82635ad0e17f4857d1ba"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.955072 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" event={"ID":"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1","Type":"ContainerStarted","Data":"e03ce0cef154607b31d8470bc1aee67508447805ef9cc81c241b4a33a7a4bcce"} Jan 03 06:00:42 crc kubenswrapper[4854]: I0103 06:00:42.956865 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podStartSLOduration=5.9033466390000005 podStartE2EDuration="59.956848691s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:47.181360216 +0000 UTC m=+1165.507936788" lastFinishedPulling="2026-01-03 06:00:41.234862258 +0000 UTC m=+1219.561438840" observedRunningTime="2026-01-03 06:00:42.952972124 +0000 UTC m=+1221.279548706" watchObservedRunningTime="2026-01-03 06:00:42.956848691 +0000 UTC m=+1221.283425263" Jan 03 06:00:43 crc kubenswrapper[4854]: I0103 06:00:43.963207 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 06:00:43 crc kubenswrapper[4854]: I0103 06:00:43.980542 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podStartSLOduration=6.641220528 podStartE2EDuration="1m0.980523434s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:47.161478167 +0000 UTC m=+1165.488054739" lastFinishedPulling="2026-01-03 06:00:41.500781073 +0000 UTC m=+1219.827357645" observedRunningTime="2026-01-03 06:00:43.977641612 +0000 UTC m=+1222.304218184" 
watchObservedRunningTime="2026-01-03 06:00:43.980523434 +0000 UTC m=+1222.307100006" Jan 03 06:00:44 crc kubenswrapper[4854]: I0103 06:00:44.978435 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" event={"ID":"1d8399ce-3c90-4601-9a32-31dc20da4552","Type":"ContainerStarted","Data":"02a72f3526af7de403502873243d9df5f78fcecb9259c58c32cf0517bd4002fe"} Jan 03 06:00:45 crc kubenswrapper[4854]: I0103 06:00:45.987278 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 06:00:46 crc kubenswrapper[4854]: I0103 06:00:46.999209 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" event={"ID":"7f7c87f2-5743-4000-a36a-3a9400e24cdd","Type":"ContainerStarted","Data":"b29a56eea2c1e72f35b4e50d6c1fd1f33dff3c436704cd4fc9bcdf70e3c082ee"} Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:46.999907 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.002459 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" event={"ID":"f5b690cb-eb48-469c-a774-eff5eda46f89","Type":"ContainerStarted","Data":"8564758a866053787b8c7c3719c9e2c0aafc3cfb635325e6e19ddeef1b7ed0e6"} Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.002646 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.004970 4854 generic.go:334] "Generic (PLEG): container finished" podID="bef59ea8-bada-439a-a6fe-1745e38b01c7" containerID="4d6e53659ed39a5dd6a8c8ba3ef8f6f3d84c57c3ace82ad4b9809b2f249492a2" exitCode=0 Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.005053 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" event={"ID":"bef59ea8-bada-439a-a6fe-1745e38b01c7","Type":"ContainerDied","Data":"4d6e53659ed39a5dd6a8c8ba3ef8f6f3d84c57c3ace82ad4b9809b2f249492a2"} Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.007832 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" event={"ID":"8f21d9f8-0bdd-43de-8196-186dccb7b2f8","Type":"ContainerStarted","Data":"a8927722b8bbd83aacc60038a50e756fcdf8482d9a1bc2a0bbf7d1a130d46153"} Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.007972 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.008457 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.011512 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.039205 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" podStartSLOduration=63.039187282 podStartE2EDuration="1m3.039187282s" podCreationTimestamp="2026-01-03 05:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:00:47.032439193 +0000 UTC m=+1225.359015765" watchObservedRunningTime="2026-01-03 06:00:47.039187282 +0000 UTC m=+1225.365763864" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.039706 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" podStartSLOduration=9.0443591 podStartE2EDuration="1m4.039701745s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.50069554 +0000 UTC m=+1164.827272112" lastFinishedPulling="2026-01-03 06:00:41.496038185 +0000 UTC m=+1219.822614757" observedRunningTime="2026-01-03 06:00:46.013242892 +0000 UTC m=+1224.339819464" watchObservedRunningTime="2026-01-03 06:00:47.039701745 +0000 UTC m=+1225.366278317" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.061926 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podStartSLOduration=8.69236123 podStartE2EDuration="1m4.06190672s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.546299485 +0000 UTC m=+1164.872876057" lastFinishedPulling="2026-01-03 06:00:41.915844975 +0000 UTC m=+1220.242421547" observedRunningTime="2026-01-03 06:00:47.05313937 +0000 UTC m=+1225.379715942" watchObservedRunningTime="2026-01-03 06:00:47.06190672 +0000 UTC m=+1225.388483282" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.141448 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podStartSLOduration=9.802371496 podStartE2EDuration="1m4.141415497s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:47.156136963 +0000 UTC m=+1165.482713535" lastFinishedPulling="2026-01-03 06:00:41.495180974 +0000 UTC m=+1219.821757536" observedRunningTime="2026-01-03 06:00:47.133461678 +0000 UTC m=+1225.460038260" watchObservedRunningTime="2026-01-03 06:00:47.141415497 +0000 UTC m=+1225.467992089" Jan 03 06:00:47 crc kubenswrapper[4854]: I0103 06:00:47.170281 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podStartSLOduration=8.556433849 podStartE2EDuration="1m4.170264978s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.244425037 +0000 UTC m=+1164.571001609" lastFinishedPulling="2026-01-03 06:00:41.858256166 +0000 UTC m=+1220.184832738" observedRunningTime="2026-01-03 06:00:47.16956495 +0000 UTC m=+1225.496141522" watchObservedRunningTime="2026-01-03 06:00:47.170264978 +0000 UTC m=+1225.496841550" Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.453432 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.580839 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bef59ea8-bada-439a-a6fe-1745e38b01c7-config-volume\") pod \"bef59ea8-bada-439a-a6fe-1745e38b01c7\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.581202 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bef59ea8-bada-439a-a6fe-1745e38b01c7-secret-volume\") pod \"bef59ea8-bada-439a-a6fe-1745e38b01c7\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.581328 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jljnz\" (UniqueName: \"kubernetes.io/projected/bef59ea8-bada-439a-a6fe-1745e38b01c7-kube-api-access-jljnz\") pod \"bef59ea8-bada-439a-a6fe-1745e38b01c7\" (UID: \"bef59ea8-bada-439a-a6fe-1745e38b01c7\") " Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.583024 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef59ea8-bada-439a-a6fe-1745e38b01c7-config-volume" (OuterVolumeSpecName: "config-volume") pod "bef59ea8-bada-439a-a6fe-1745e38b01c7" (UID: "bef59ea8-bada-439a-a6fe-1745e38b01c7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.599393 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef59ea8-bada-439a-a6fe-1745e38b01c7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bef59ea8-bada-439a-a6fe-1745e38b01c7" (UID: "bef59ea8-bada-439a-a6fe-1745e38b01c7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.600797 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef59ea8-bada-439a-a6fe-1745e38b01c7-kube-api-access-jljnz" (OuterVolumeSpecName: "kube-api-access-jljnz") pod "bef59ea8-bada-439a-a6fe-1745e38b01c7" (UID: "bef59ea8-bada-439a-a6fe-1745e38b01c7"). InnerVolumeSpecName "kube-api-access-jljnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.683541 4854 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bef59ea8-bada-439a-a6fe-1745e38b01c7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.683591 4854 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bef59ea8-bada-439a-a6fe-1745e38b01c7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 06:00:48 crc kubenswrapper[4854]: I0103 06:00:48.683603 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jljnz\" (UniqueName: \"kubernetes.io/projected/bef59ea8-bada-439a-a6fe-1745e38b01c7-kube-api-access-jljnz\") on node \"crc\" DevicePath \"\"" Jan 03 06:00:49 crc kubenswrapper[4854]: I0103 06:00:49.036206 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" event={"ID":"bef59ea8-bada-439a-a6fe-1745e38b01c7","Type":"ContainerDied","Data":"acac295213f5ba77b4b2fefe3e65b9badc59c026cd78f70546e91eb48c099309"} Jan 03 06:00:49 crc kubenswrapper[4854]: I0103 06:00:49.036244 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acac295213f5ba77b4b2fefe3e65b9badc59c026cd78f70546e91eb48c099309" Jan 03 06:00:49 crc kubenswrapper[4854]: I0103 06:00:49.036271 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c" Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.054652 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" event={"ID":"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1","Type":"ContainerStarted","Data":"e5a91fcb715bdf60c1d37a559ebbb7addbd8d8b8f95e6c6e300d56858e664bd6"} Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.055313 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.057730 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" event={"ID":"a327e8cf-824f-41b1-9076-5fd57a8b4352","Type":"ContainerStarted","Data":"6276d73e5970a2f94289dded2de6b14873d5cb520a1efb79f3fa5ee5db4cac7c"} Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.058011 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.059451 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" event={"ID":"81de0b3b-e6fc-45c9-b347-995726d00213","Type":"ContainerStarted","Data":"42580edc27d34903604c0511b72307f02c183363e82be201ab729031ea338806"} Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.059720 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.061041 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" 
event={"ID":"1f9928f3-0c28-40df-b6ad-c871424ad3a6","Type":"ContainerStarted","Data":"7a17508ff67dfec2d67f7f271686ca65908119c01fe986e63df083ab37deb07e"} Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.061216 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.071471 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" podStartSLOduration=59.503780229 podStartE2EDuration="1m8.071453181s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 06:00:41.859435596 +0000 UTC m=+1220.186012168" lastFinishedPulling="2026-01-03 06:00:50.427108548 +0000 UTC m=+1228.753685120" observedRunningTime="2026-01-03 06:00:51.071441481 +0000 UTC m=+1229.398018063" watchObservedRunningTime="2026-01-03 06:00:51.071453181 +0000 UTC m=+1229.398029753" Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.094890 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podStartSLOduration=3.882998652 podStartE2EDuration="1m8.094868986s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.205722736 +0000 UTC m=+1164.532299308" lastFinishedPulling="2026-01-03 06:00:50.41759306 +0000 UTC m=+1228.744169642" observedRunningTime="2026-01-03 06:00:51.090352323 +0000 UTC m=+1229.416928895" watchObservedRunningTime="2026-01-03 06:00:51.094868986 +0000 UTC m=+1229.421445558" Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.144504 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" podStartSLOduration=59.603703416 podStartE2EDuration="1m8.144483236s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 06:00:41.911454256 +0000 UTC m=+1220.238030828" lastFinishedPulling="2026-01-03 06:00:50.452234036 +0000 UTC m=+1228.778810648" observedRunningTime="2026-01-03 06:00:51.133354508 +0000 UTC m=+1229.459931080" watchObservedRunningTime="2026-01-03 06:00:51.144483236 +0000 UTC m=+1229.471059808" Jan 03 06:00:51 crc kubenswrapper[4854]: I0103 06:00:51.155668 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podStartSLOduration=3.198680981 podStartE2EDuration="1m8.155645975s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:45.40154297 +0000 UTC m=+1163.728119542" lastFinishedPulling="2026-01-03 06:00:50.358507954 +0000 UTC m=+1228.685084536" observedRunningTime="2026-01-03 06:00:51.152555928 +0000 UTC m=+1229.479132520" watchObservedRunningTime="2026-01-03 06:00:51.155645975 +0000 UTC m=+1229.482222547" Jan 03 06:00:52 crc kubenswrapper[4854]: I0103 06:00:52.072597 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" event={"ID":"14991c3c-8c35-4008-b1a0-1b8690074322","Type":"ContainerStarted","Data":"95bce15c2a178ac1cba0a18dc148e3ffba44a4cc32babeb5c7258243a1c05990"} Jan 03 06:00:52 crc kubenswrapper[4854]: I0103 06:00:52.073690 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 06:00:52 crc kubenswrapper[4854]: I0103 06:00:52.093304 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podStartSLOduration=4.2877702509999995 podStartE2EDuration="1m9.093242697s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.234315164 +0000 UTC m=+1164.560891736" lastFinishedPulling="2026-01-03 06:00:51.03978761 +0000 UTC m=+1229.366364182" observedRunningTime="2026-01-03 06:00:52.086797316 +0000 UTC m=+1230.413373928" watchObservedRunningTime="2026-01-03 06:00:52.093242697 +0000 UTC m=+1230.419819279" Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.085277 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" event={"ID":"c2f6c336-91f0-41e6-b439-c5d940264b7f","Type":"ContainerStarted","Data":"404fb52f064b6dcc85411ed7ef7cc4d30a115e2fd32313478d77f5c379b1bdd0"} Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.085890 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.087503 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" event={"ID":"b6397338-ed12-4f81-98aa-97a84e4256f6","Type":"ContainerStarted","Data":"5ebd6e2ae48cad9c7677d6417f2c21ca46e3deed27593b77060a030a4cf7fcaf"} Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.111470 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podStartSLOduration=4.019824625 podStartE2EDuration="1m10.111450452s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.251833773 +0000 UTC m=+1164.578410345" lastFinishedPulling="2026-01-03 06:00:52.3434596 +0000 UTC m=+1230.670036172" observedRunningTime="2026-01-03 06:00:53.104104908 +0000 UTC m=+1231.430681490" watchObservedRunningTime="2026-01-03 06:00:53.111450452 +0000 UTC m=+1231.438027054" Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.154937 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfs9s" podStartSLOduration=4.022621457 podStartE2EDuration="1m9.154906468s" podCreationTimestamp="2026-01-03 05:59:44 +0000 UTC" firstStartedPulling="2026-01-03 05:59:47.155936898 +0000 UTC m=+1165.482513470" lastFinishedPulling="2026-01-03 06:00:52.288221889 +0000 UTC m=+1230.614798481" observedRunningTime="2026-01-03 06:00:53.138235111 +0000 UTC m=+1231.464811783" watchObservedRunningTime="2026-01-03 06:00:53.154906468 +0000 UTC m=+1231.481483080" Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.521138 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.564437 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.898642 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 06:00:53 crc kubenswrapper[4854]: I0103 06:00:53.964496 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 06:00:54 crc kubenswrapper[4854]: I0103 06:00:54.005165 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 06:00:54 crc kubenswrapper[4854]: E0103 06:00:54.119351 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" Jan 03 06:00:54 crc kubenswrapper[4854]: E0103 06:00:54.119533 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" Jan 03 06:00:54 crc kubenswrapper[4854]: I0103 06:00:54.268608 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 06:00:54 crc kubenswrapper[4854]: I0103 06:00:54.314517 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 06:00:54 crc kubenswrapper[4854]: I0103 06:00:54.387911 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 06:00:54 crc kubenswrapper[4854]: I0103 06:00:54.425807 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 06:00:54 crc kubenswrapper[4854]: I0103 06:00:54.574842 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 06:00:55 crc kubenswrapper[4854]: E0103 06:00:55.120233 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" Jan 03 06:00:57 crc kubenswrapper[4854]: I0103 06:00:57.130243 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" event={"ID":"402a077e-f741-447d-ab1c-25bc62cd24cf","Type":"ContainerStarted","Data":"451f22fd7dca98b9f9f7499fbbe0a62e1c5d3a9d28d0859afaee9fd992ccdad3"} Jan 03 06:00:57 crc kubenswrapper[4854]: I0103 06:00:57.131291 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 
Jan 03 06:00:57 crc kubenswrapper[4854]: I0103 06:00:57.162673 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podStartSLOduration=4.053487713 podStartE2EDuration="1m14.162650314s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:46.575293443 +0000 UTC m=+1164.901870015" lastFinishedPulling="2026-01-03 06:00:56.684456034 +0000 UTC m=+1235.011032616" observedRunningTime="2026-01-03 06:00:57.153465205 +0000 UTC m=+1235.480041797" watchObservedRunningTime="2026-01-03 06:00:57.162650314 +0000 UTC m=+1235.489226896"
Jan 03 06:00:59 crc kubenswrapper[4854]: I0103 06:00:59.504851 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9"
Jan 03 06:01:00 crc kubenswrapper[4854]: I0103 06:01:00.056055 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5"
Jan 03 06:01:00 crc kubenswrapper[4854]: I0103 06:01:00.615501 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc"
Jan 03 06:01:03 crc kubenswrapper[4854]: I0103 06:01:03.519365 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v"
Jan 03 06:01:03 crc kubenswrapper[4854]: I0103 06:01:03.531903 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6"
Jan 03 06:01:03 crc kubenswrapper[4854]: I0103 06:01:03.846603 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb"
Jan 03 06:01:03 crc kubenswrapper[4854]: I0103 06:01:03.990272 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv"
Jan 03 06:01:04 crc kubenswrapper[4854]: I0103 06:01:04.269274 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb"
Jan 03 06:01:07 crc kubenswrapper[4854]: I0103 06:01:07.121004 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 03 06:01:07 crc kubenswrapper[4854]: I0103 06:01:07.268348 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" event={"ID":"ad6a18d3-e1d2-446a-9b41-a9fca5e8b574","Type":"ContainerStarted","Data":"782199bdda514fa01ba01343a904400694b5d17b3870189dbd47ddbd380a3384"}
Jan 03 06:01:07 crc kubenswrapper[4854]: I0103 06:01:07.269512 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp"
Jan 03 06:01:07 crc kubenswrapper[4854]: I0103 06:01:07.296213 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podStartSLOduration=4.6990982169999995 podStartE2EDuration="1m24.296189698s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:47.202663351 +0000 UTC m=+1165.529239953" lastFinishedPulling="2026-01-03 06:01:06.799754842 +0000 UTC m=+1245.126331434" observedRunningTime="2026-01-03 06:01:07.290089945 +0000 UTC m=+1245.616666537" watchObservedRunningTime="2026-01-03 06:01:07.296189698 +0000 UTC m=+1245.622766280"
Jan 03 06:01:08 crc kubenswrapper[4854]: I0103 06:01:08.277112 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" event={"ID":"fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1","Type":"ContainerStarted","Data":"27e03b059df2eaacdaf62fc993ec49a70301713595a9b2b921343637a5e6ac56"}
Jan 03 06:01:08 crc kubenswrapper[4854]: I0103 06:01:08.278043 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx"
Jan 03 06:01:08 crc kubenswrapper[4854]: I0103 06:01:08.298437 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podStartSLOduration=4.738303222 podStartE2EDuration="1m25.298413333s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:47.163702763 +0000 UTC m=+1165.490279335" lastFinishedPulling="2026-01-03 06:01:07.723812874 +0000 UTC m=+1246.050389446" observedRunningTime="2026-01-03 06:01:08.289944952 +0000 UTC m=+1246.616521524" watchObservedRunningTime="2026-01-03 06:01:08.298413333 +0000 UTC m=+1246.624989915"
Jan 03 06:01:10 crc kubenswrapper[4854]: I0103 06:01:10.298880 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" event={"ID":"e62c43c5-cac2-4f9f-9e1b-de61827c4c94","Type":"ContainerStarted","Data":"681a58a1260cf4c81cb41d9d5ff37952fd310164763547c25d23c6d3b57e2a94"}
Jan 03 06:01:10 crc kubenswrapper[4854]: I0103 06:01:10.300853 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54"
Jan 03 06:01:10 crc kubenswrapper[4854]: I0103 06:01:10.317618 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podStartSLOduration=4.573589808 podStartE2EDuration="1m27.317601755s" podCreationTimestamp="2026-01-03 05:59:43 +0000 UTC" firstStartedPulling="2026-01-03 05:59:47.179966511 +0000 UTC m=+1165.506543083" lastFinishedPulling="2026-01-03 06:01:09.923978448 +0000 UTC m=+1248.250555030" observedRunningTime="2026-01-03 06:01:10.313066032 +0000 UTC m=+1248.639642614" watchObservedRunningTime="2026-01-03 06:01:10.317601755 +0000 UTC m=+1248.644178327"
Jan 03 06:01:11 crc kubenswrapper[4854]: I0103 06:01:11.755757 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 06:01:11 crc kubenswrapper[4854]: I0103 06:01:11.756174 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
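Here the prober starts flagging machine-config-daemon: the liveness GET against 127.0.0.1:8798/health is refused at 06:01:11, and again at 06:01:41 below, the 30-second gap matching a typical probe period. A small sketch for grouping "Probe failed" records per pod and probe type (assumptions as before):

    import re
    from collections import Counter

    PROBE = re.compile(r'"Probe failed" probeType="(?P<type>[^"]+)" pod="(?P<pod>[^"]+)"')

    fails = Counter()
    with open('kubelet.log') as fh:   # assumed path, as above
        for line in fh:
            m = PROBE.search(line)
            if m:
                fails[(m.group('pod'), m.group('type'))] += 1

    for key, n in fails.most_common():
        print(n, *key)   # e.g. 2 openshift-machine-config-operator/machine-config-daemon-qdhfx Liveness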
status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 06:01:14 crc kubenswrapper[4854]: I0103 06:01:14.222014 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 06:01:24 crc kubenswrapper[4854]: I0103 06:01:24.278797 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 06:01:41 crc kubenswrapper[4854]: I0103 06:01:41.755739 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:01:41 crc kubenswrapper[4854]: I0103 06:01:41.756970 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:01:41 crc kubenswrapper[4854]: I0103 06:01:41.757049 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:01:41 crc kubenswrapper[4854]: I0103 06:01:41.758638 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"698413854ef7d83140b2bf1b7914886f1f1ee8bad9480a9e32b96368143c12a3"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:01:41 crc kubenswrapper[4854]: I0103 06:01:41.758723 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://698413854ef7d83140b2bf1b7914886f1f1ee8bad9480a9e32b96368143c12a3" gracePeriod=600 Jan 03 06:01:42 crc kubenswrapper[4854]: I0103 06:01:42.644555 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="698413854ef7d83140b2bf1b7914886f1f1ee8bad9480a9e32b96368143c12a3" exitCode=0 Jan 03 06:01:42 crc kubenswrapper[4854]: I0103 06:01:42.644632 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"698413854ef7d83140b2bf1b7914886f1f1ee8bad9480a9e32b96368143c12a3"} Jan 03 06:01:42 crc kubenswrapper[4854]: I0103 06:01:42.645029 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"9b4c3aaa2ac11419adcfab21b6c1450ea5c292a92e0be09a3fba503318e11474"} Jan 03 06:01:42 crc kubenswrapper[4854]: I0103 06:01:42.645060 4854 scope.go:117] "RemoveContainer" containerID="382eac6c86719b2cf06557df9d71397fec24546fd4a1359e257bb73a0fbe3ef6" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.790573 4854 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-6mlr2"] Jan 03 06:01:44 crc kubenswrapper[4854]: E0103 06:01:44.793710 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef59ea8-bada-439a-a6fe-1745e38b01c7" containerName="collect-profiles" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.793812 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef59ea8-bada-439a-a6fe-1745e38b01c7" containerName="collect-profiles" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.794116 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef59ea8-bada-439a-a6fe-1745e38b01c7" containerName="collect-profiles" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.795708 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.796401 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6mlr2"] Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.801325 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-ts65w" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.801648 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.801739 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.801871 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.859719 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xwdk9"] Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.862677 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.870693 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.880536 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xwdk9"] Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.923916 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snn75\" (UniqueName: \"kubernetes.io/projected/581c10f5-d4c5-41af-9930-dc5324b5b48a-kube-api-access-snn75\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.923970 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.924049 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27abeea6-c224-4248-99aa-cfc50d1e911b-config\") pod \"dnsmasq-dns-675f4bcbfc-6mlr2\" (UID: \"27abeea6-c224-4248-99aa-cfc50d1e911b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.924323 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-config\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:44 crc kubenswrapper[4854]: I0103 06:01:44.924544 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4d2k\" (UniqueName: \"kubernetes.io/projected/27abeea6-c224-4248-99aa-cfc50d1e911b-kube-api-access-j4d2k\") pod \"dnsmasq-dns-675f4bcbfc-6mlr2\" (UID: \"27abeea6-c224-4248-99aa-cfc50d1e911b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.026914 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.027013 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27abeea6-c224-4248-99aa-cfc50d1e911b-config\") pod \"dnsmasq-dns-675f4bcbfc-6mlr2\" (UID: \"27abeea6-c224-4248-99aa-cfc50d1e911b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.027062 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-config\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:45 crc 
kubenswrapper[4854]: I0103 06:01:45.027127 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4d2k\" (UniqueName: \"kubernetes.io/projected/27abeea6-c224-4248-99aa-cfc50d1e911b-kube-api-access-j4d2k\") pod \"dnsmasq-dns-675f4bcbfc-6mlr2\" (UID: \"27abeea6-c224-4248-99aa-cfc50d1e911b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.027213 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snn75\" (UniqueName: \"kubernetes.io/projected/581c10f5-d4c5-41af-9930-dc5324b5b48a-kube-api-access-snn75\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.028284 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27abeea6-c224-4248-99aa-cfc50d1e911b-config\") pod \"dnsmasq-dns-675f4bcbfc-6mlr2\" (UID: \"27abeea6-c224-4248-99aa-cfc50d1e911b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.028364 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-config\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.029303 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.049567 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snn75\" (UniqueName: \"kubernetes.io/projected/581c10f5-d4c5-41af-9930-dc5324b5b48a-kube-api-access-snn75\") pod \"dnsmasq-dns-78dd6ddcc-xwdk9\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.049667 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4d2k\" (UniqueName: \"kubernetes.io/projected/27abeea6-c224-4248-99aa-cfc50d1e911b-kube-api-access-j4d2k\") pod \"dnsmasq-dns-675f4bcbfc-6mlr2\" (UID: \"27abeea6-c224-4248-99aa-cfc50d1e911b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.125791 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.189979 4854 util.go:30] "No sandbox for pod can be found. 
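The dnsmasq pods above walk through the volume manager's usual three steps per volume: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded a few milliseconds later. A sketch that checks every (pod, volume) pair saw the phases in order (assumptions as before; the \" sequences are matched literally because the klog message is quoted inside the journal line):

    import re
    from collections import defaultdict

    PHASE = re.compile(
        r'(?P<phase>VerifyControllerAttachedVolume started|MountVolume started|'
        r'MountVolume\.SetUp succeeded) for volume \\"(?P<vol>[^\\"]+)\\"'
        r'.*pod="(?P<pod>[^"]+)"')

    seen = defaultdict(list)
    with open('kubelet.log') as fh:   # assumed path, as above
        for line in fh:
            m = PHASE.search(line)
            if m:
                seen[(m.group('pod'), m.group('vol'))].append(m.group('phase'))

    # A healthy mount lists: attach check, mount start, SetUp succeeded.
    for key, phases in sorted(seen.items()):
        print(key, '->', phases)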
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.649191 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6mlr2"] Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.692551 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" event={"ID":"27abeea6-c224-4248-99aa-cfc50d1e911b","Type":"ContainerStarted","Data":"04987a3aeb4d57bf5d1113abbbb1213f98b6feed629d7eaddfc21af1f915c716"} Jan 03 06:01:45 crc kubenswrapper[4854]: I0103 06:01:45.833304 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xwdk9"] Jan 03 06:01:46 crc kubenswrapper[4854]: I0103 06:01:46.706571 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" event={"ID":"581c10f5-d4c5-41af-9930-dc5324b5b48a","Type":"ContainerStarted","Data":"d5be23cb32ae5f37ce4368038267cb0bb4a647d040baa3566d1663fb24974683"} Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.524849 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6mlr2"] Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.553250 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2kbfm"] Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.555472 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.579702 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2kbfm"] Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.716946 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzw2\" (UniqueName: \"kubernetes.io/projected/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-kube-api-access-dmzw2\") pod \"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.717006 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-config\") pod \"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.719347 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.821253 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-config\") pod \"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.821320 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-dns-svc\") pod 
\"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.821445 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmzw2\" (UniqueName: \"kubernetes.io/projected/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-kube-api-access-dmzw2\") pod \"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.822472 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-config\") pod \"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.823943 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.852170 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmzw2\" (UniqueName: \"kubernetes.io/projected/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-kube-api-access-dmzw2\") pod \"dnsmasq-dns-666b6646f7-2kbfm\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.892772 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.901635 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xwdk9"] Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.932424 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9p2s9"] Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.934033 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:47 crc kubenswrapper[4854]: I0103 06:01:47.945019 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9p2s9"] Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.026211 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg85k\" (UniqueName: \"kubernetes.io/projected/11e65dd3-c929-4a34-aa76-94ad1d7464db-kube-api-access-wg85k\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.026509 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.026719 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-config\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.129227 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg85k\" (UniqueName: \"kubernetes.io/projected/11e65dd3-c929-4a34-aa76-94ad1d7464db-kube-api-access-wg85k\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.129758 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.129874 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-config\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.131589 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-config\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.135490 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.154023 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg85k\" (UniqueName: 
\"kubernetes.io/projected/11e65dd3-c929-4a34-aa76-94ad1d7464db-kube-api-access-wg85k\") pod \"dnsmasq-dns-57d769cc4f-9p2s9\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.319533 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.471721 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2kbfm"] Jan 03 06:01:48 crc kubenswrapper[4854]: W0103 06:01:48.482988 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19db79b0_2939_4a6b_bf9a_9f35b6f63acd.slice/crio-ee9a19919b6476108f1acb3bf12070dfe225ad886a5971c3ee790fe0b4942751 WatchSource:0}: Error finding container ee9a19919b6476108f1acb3bf12070dfe225ad886a5971c3ee790fe0b4942751: Status 404 returned error can't find the container with id ee9a19919b6476108f1acb3bf12070dfe225ad886a5971c3ee790fe0b4942751 Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.679884 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.684382 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.699417 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.704410 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.704658 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.705776 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.705942 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.706058 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-v2wjl" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.706189 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.707670 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.718908 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.720910 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.729749 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.731661 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.736544 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.743608 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.746540 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5742bd8-396a-4174-a8b7-dd6deec69632-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.746663 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.746800 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.747908 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.748102 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.748219 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.748389 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.748502 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b5742bd8-396a-4174-a8b7-dd6deec69632-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.748588 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.748692 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8q2\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-kube-api-access-4p8q2\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.754390 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.754513 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" event={"ID":"19db79b0-2939-4a6b-bf9a-9f35b6f63acd","Type":"ContainerStarted","Data":"ee9a19919b6476108f1acb3bf12070dfe225ad886a5971c3ee790fe0b4942751"} Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.841809 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9p2s9"] Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.850836 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.850907 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5742bd8-396a-4174-a8b7-dd6deec69632-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.850927 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.850950 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/11d4187f-5938-4054-9eec-4d84f843bd73-pod-info\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.850986 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 
06:01:48.851012 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851041 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851061 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851094 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba007649-daf8-445b-b2c8-73ce6ec54403-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851110 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851131 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-config-data\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851145 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-config-data\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851164 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxk9\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-kube-api-access-mdxk9\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851193 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mg4\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-kube-api-access-p2mg4\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851219 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851238 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851257 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851275 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851291 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/11d4187f-5938-4054-9eec-4d84f843bd73-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851307 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851330 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851375 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851400 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851427 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b5742bd8-396a-4174-a8b7-dd6deec69632-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851442 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851460 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851484 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851502 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8q2\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-kube-api-access-4p8q2\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851517 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851535 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-server-conf\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851549 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba007649-daf8-445b-b2c8-73ce6ec54403-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851573 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.851588 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.854195 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.855377 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.857115 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.859224 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.859420 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.860320 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.866147 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.866295 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e21c3608fa8a5b41b0269f8d0775f0d8ff74b744eb534b992671df54d6ebfc27/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.870643 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.871071 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5742bd8-396a-4174-a8b7-dd6deec69632-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.878260 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p8q2\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-kube-api-access-4p8q2\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.883698 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5742bd8-396a-4174-a8b7-dd6deec69632-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.921479 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"rabbitmq-server-0\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") " pod="openstack/rabbitmq-server-0" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.954008 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955023 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955223 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " 
pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955255 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba007649-daf8-445b-b2c8-73ce6ec54403-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955286 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-config-data\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955304 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-config-data\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955328 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxk9\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-kube-api-access-mdxk9\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955355 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mg4\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-kube-api-access-p2mg4\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955377 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955401 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955417 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955439 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/11d4187f-5938-4054-9eec-4d84f843bd73-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955456 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955497 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955547 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955580 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.955599 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.956248 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-server-conf\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.956275 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba007649-daf8-445b-b2c8-73ce6ec54403-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.956318 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.956342 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.956391 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/11d4187f-5938-4054-9eec-4d84f843bd73-pod-info\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc 
kubenswrapper[4854]: I0103 06:01:48.957217 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.957250 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c874a54e7a277745e906225a135021eb2dc44cabc2a4e528eb3f699d09437dd7/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.957624 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.957629 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.958192 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.958211 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.958667 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-server-conf\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.959908 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/11d4187f-5938-4054-9eec-4d84f843bd73-pod-info\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.960725 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba007649-daf8-445b-b2c8-73ce6ec54403-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.961365 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.962850 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-config-data\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.962880 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.964396 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.964910 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.966141 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba007649-daf8-445b-b2c8-73ce6ec54403-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.967544 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.967655 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/211f3cc9bde56467c1ebdea293e17dd39cf39688048069027414643ee5da736e/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.968369 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/11d4187f-5938-4054-9eec-4d84f843bd73-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.968410 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.973074 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-config-data\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.976260 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mg4\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-kube-api-access-p2mg4\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.976958 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.977024 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:48 crc kubenswrapper[4854]: I0103 06:01:48.983452 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxk9\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-kube-api-access-mdxk9\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.022617 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"rabbitmq-server-2\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") " pod="openstack/rabbitmq-server-2" Jan 03 06:01:49 crc 
kubenswrapper[4854]: I0103 06:01:49.038509 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"rabbitmq-server-1\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") " pod="openstack/rabbitmq-server-1" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.039167 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.040062 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.050364 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.057016 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2l8qk" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.058199 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.058337 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.059104 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.059361 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71288814-2f4e-4e92-8064-8f9ef1920212-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.059389 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.059425 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7fn\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-kube-api-access-wn7fn\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.059759 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.059767 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.059889 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.059993 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.060460 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.060677 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.060732 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.060758 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.060818 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.060837 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.060888 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71288814-2f4e-4e92-8064-8f9ef1920212-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.062132 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.076913 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.103821 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164401 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7fn\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-kube-api-access-wn7fn\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164547 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164596 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164616 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164688 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164718 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164755 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71288814-2f4e-4e92-8064-8f9ef1920212-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164873 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.164988 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.165105 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71288814-2f4e-4e92-8064-8f9ef1920212-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.165124 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.168290 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.168565 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.170258 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.170745 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.176790 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.190576 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71288814-2f4e-4e92-8064-8f9ef1920212-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.190908 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71288814-2f4e-4e92-8064-8f9ef1920212-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 
03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.191624 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.193659 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.193889 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.193925 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eea12b79154e29fc712e0c8a941340ba71a472a92b568d3e8a62025798d2edd7/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.204122 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7fn\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-kube-api-access-wn7fn\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.258166 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"rabbitmq-cell1-server-0\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.391545 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.782008 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" event={"ID":"11e65dd3-c929-4a34-aa76-94ad1d7464db","Type":"ContainerStarted","Data":"f483b082d7e8f5b1b831f169f578e1398fef77ea1046aa46655c6fb10238f6dc"} Jan 03 06:01:49 crc kubenswrapper[4854]: I0103 06:01:49.975451 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.179374 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.200800 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.383025 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.506548 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.509108 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.512976 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.513461 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.513664 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tnr7r" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.518805 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.519843 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.523655 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.648752 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-kolla-config\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.648815 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.648842 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10578fce-2c06-4977-9cb2-51b8593f9fed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 
06:01:50.648938 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10578fce-2c06-4977-9cb2-51b8593f9fed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.648956 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d44m\" (UniqueName: \"kubernetes.io/projected/10578fce-2c06-4977-9cb2-51b8593f9fed-kube-api-access-5d44m\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.648976 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10578fce-2c06-4977-9cb2-51b8593f9fed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.649001 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f88d9d08-44e1-4d31-b859-d2c47d1b0b95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f88d9d08-44e1-4d31-b859-d2c47d1b0b95\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.649033 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-config-data-default\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.751228 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.751279 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10578fce-2c06-4977-9cb2-51b8593f9fed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.751369 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10578fce-2c06-4977-9cb2-51b8593f9fed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.751391 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d44m\" (UniqueName: \"kubernetes.io/projected/10578fce-2c06-4977-9cb2-51b8593f9fed-kube-api-access-5d44m\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.751411 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10578fce-2c06-4977-9cb2-51b8593f9fed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.751435 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f88d9d08-44e1-4d31-b859-d2c47d1b0b95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f88d9d08-44e1-4d31-b859-d2c47d1b0b95\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.751467 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-config-data-default\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.751503 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-kolla-config\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.752337 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-kolla-config\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.753491 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.753719 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/10578fce-2c06-4977-9cb2-51b8593f9fed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.756789 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/10578fce-2c06-4977-9cb2-51b8593f9fed-config-data-default\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.770872 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/10578fce-2c06-4977-9cb2-51b8593f9fed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.770911 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10578fce-2c06-4977-9cb2-51b8593f9fed-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.780738 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d44m\" (UniqueName: \"kubernetes.io/projected/10578fce-2c06-4977-9cb2-51b8593f9fed-kube-api-access-5d44m\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.796921 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.796958 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f88d9d08-44e1-4d31-b859-d2c47d1b0b95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f88d9d08-44e1-4d31-b859-d2c47d1b0b95\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1c15b76785231c72d7ded2dd31f84814ef383188320cde39a74f29c2fc19686/globalmount\"" pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.834308 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"11d4187f-5938-4054-9eec-4d84f843bd73","Type":"ContainerStarted","Data":"27c7b927832c03c7ba640994748c7296335ccf34d5985ffe88f86de2f25e7391"} Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.834471 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f88d9d08-44e1-4d31-b859-d2c47d1b0b95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f88d9d08-44e1-4d31-b859-d2c47d1b0b95\") pod \"openstack-galera-0\" (UID: \"10578fce-2c06-4977-9cb2-51b8593f9fed\") " pod="openstack/openstack-galera-0" Jan 03 06:01:50 crc kubenswrapper[4854]: I0103 06:01:50.849461 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.643928 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.650650 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.654145 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.656247 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.656627 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.657679 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.660727 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mgjzj" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.843056 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/748d9586-5917-42ab-8f1f-3a811b724dae-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.843122 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/748d9586-5917-42ab-8f1f-3a811b724dae-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.843193 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.843236 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c93efa61-87de-4f45-8198-7a4765dd2f24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c93efa61-87de-4f45-8198-7a4765dd2f24\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.843302 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.843353 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtpdz\" (UniqueName: \"kubernetes.io/projected/748d9586-5917-42ab-8f1f-3a811b724dae-kube-api-access-wtpdz\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.843384 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748d9586-5917-42ab-8f1f-3a811b724dae-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.843426 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.948842 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748d9586-5917-42ab-8f1f-3a811b724dae-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.948902 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.949308 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/748d9586-5917-42ab-8f1f-3a811b724dae-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.949867 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/748d9586-5917-42ab-8f1f-3a811b724dae-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.950202 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/748d9586-5917-42ab-8f1f-3a811b724dae-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.951315 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.951461 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.952042 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.952141 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c93efa61-87de-4f45-8198-7a4765dd2f24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c93efa61-87de-4f45-8198-7a4765dd2f24\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.953875 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.953938 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtpdz\" (UniqueName: \"kubernetes.io/projected/748d9586-5917-42ab-8f1f-3a811b724dae-kube-api-access-wtpdz\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.955541 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/748d9586-5917-42ab-8f1f-3a811b724dae-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.957377 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/748d9586-5917-42ab-8f1f-3a811b724dae-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.957638 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748d9586-5917-42ab-8f1f-3a811b724dae-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.958276 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.958301 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c93efa61-87de-4f45-8198-7a4765dd2f24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c93efa61-87de-4f45-8198-7a4765dd2f24\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da42e17485985d2315c9401968d56839107efd8a9694d4b9c9dd3061ea1456b0/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:51 crc kubenswrapper[4854]: I0103 06:01:51.980291 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtpdz\" (UniqueName: \"kubernetes.io/projected/748d9586-5917-42ab-8f1f-3a811b724dae-kube-api-access-wtpdz\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.032046 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c93efa61-87de-4f45-8198-7a4765dd2f24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c93efa61-87de-4f45-8198-7a4765dd2f24\") pod \"openstack-cell1-galera-0\" (UID: \"748d9586-5917-42ab-8f1f-3a811b724dae\") " pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.083662 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.084893 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.100106 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.142144 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.148069 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.148691 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hlk78" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.262575 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxpv\" (UniqueName: \"kubernetes.io/projected/61632803-5660-4e68-865c-0d231613aec4-kube-api-access-qnxpv\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.262728 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61632803-5660-4e68-865c-0d231613aec4-kolla-config\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.262846 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61632803-5660-4e68-865c-0d231613aec4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 
06:01:52.262874 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61632803-5660-4e68-865c-0d231613aec4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.262919 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61632803-5660-4e68-865c-0d231613aec4-config-data\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.293284 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.371345 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61632803-5660-4e68-865c-0d231613aec4-config-data\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.371554 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnxpv\" (UniqueName: \"kubernetes.io/projected/61632803-5660-4e68-865c-0d231613aec4-kube-api-access-qnxpv\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.371717 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61632803-5660-4e68-865c-0d231613aec4-kolla-config\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.371848 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61632803-5660-4e68-865c-0d231613aec4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.371900 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61632803-5660-4e68-865c-0d231613aec4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.373483 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61632803-5660-4e68-865c-0d231613aec4-config-data\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.379506 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61632803-5660-4e68-865c-0d231613aec4-kolla-config\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.406812 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/61632803-5660-4e68-865c-0d231613aec4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.407554 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61632803-5660-4e68-865c-0d231613aec4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.413054 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnxpv\" (UniqueName: \"kubernetes.io/projected/61632803-5660-4e68-865c-0d231613aec4-kube-api-access-qnxpv\") pod \"memcached-0\" (UID: \"61632803-5660-4e68-865c-0d231613aec4\") " pod="openstack/memcached-0" Jan 03 06:01:52 crc kubenswrapper[4854]: I0103 06:01:52.486790 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 03 06:01:53 crc kubenswrapper[4854]: I0103 06:01:53.968718 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:01:53 crc kubenswrapper[4854]: I0103 06:01:53.970881 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 06:01:53 crc kubenswrapper[4854]: I0103 06:01:53.979490 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6trlx" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:53.999552 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.038868 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbpsb\" (UniqueName: \"kubernetes.io/projected/b2518f81-3d3d-47a6-a157-19c2685f07d2-kube-api-access-fbpsb\") pod \"kube-state-metrics-0\" (UID: \"b2518f81-3d3d-47a6-a157-19c2685f07d2\") " pod="openstack/kube-state-metrics-0" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.141524 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbpsb\" (UniqueName: \"kubernetes.io/projected/b2518f81-3d3d-47a6-a157-19c2685f07d2-kube-api-access-fbpsb\") pod \"kube-state-metrics-0\" (UID: \"b2518f81-3d3d-47a6-a157-19c2685f07d2\") " pod="openstack/kube-state-metrics-0" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.191694 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbpsb\" (UniqueName: \"kubernetes.io/projected/b2518f81-3d3d-47a6-a157-19c2685f07d2-kube-api-access-fbpsb\") pod \"kube-state-metrics-0\" (UID: \"b2518f81-3d3d-47a6-a157-19c2685f07d2\") " pod="openstack/kube-state-metrics-0" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.305665 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.651921 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h"] Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.653286 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.657366 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.657606 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-67jrl" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.672273 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h"] Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.762527 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32ebd9b5-c83a-401d-824e-77c47a842836-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ngk5h\" (UID: \"32ebd9b5-c83a-401d-824e-77c47a842836\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.762665 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzkgv\" (UniqueName: \"kubernetes.io/projected/32ebd9b5-c83a-401d-824e-77c47a842836-kube-api-access-fzkgv\") pod \"observability-ui-dashboards-66cbf594b5-ngk5h\" (UID: \"32ebd9b5-c83a-401d-824e-77c47a842836\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.864232 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzkgv\" (UniqueName: \"kubernetes.io/projected/32ebd9b5-c83a-401d-824e-77c47a842836-kube-api-access-fzkgv\") pod \"observability-ui-dashboards-66cbf594b5-ngk5h\" (UID: \"32ebd9b5-c83a-401d-824e-77c47a842836\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.864683 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32ebd9b5-c83a-401d-824e-77c47a842836-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ngk5h\" (UID: \"32ebd9b5-c83a-401d-824e-77c47a842836\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:54 crc kubenswrapper[4854]: E0103 06:01:54.864819 4854 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 03 06:01:54 crc kubenswrapper[4854]: E0103 06:01:54.864899 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32ebd9b5-c83a-401d-824e-77c47a842836-serving-cert podName:32ebd9b5-c83a-401d-824e-77c47a842836 nodeName:}" failed. No retries permitted until 2026-01-03 06:01:55.36488066 +0000 UTC m=+1293.691457232 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/32ebd9b5-c83a-401d-824e-77c47a842836-serving-cert") pod "observability-ui-dashboards-66cbf594b5-ngk5h" (UID: "32ebd9b5-c83a-401d-824e-77c47a842836") : secret "observability-ui-dashboards" not found Jan 03 06:01:54 crc kubenswrapper[4854]: I0103 06:01:54.906160 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzkgv\" (UniqueName: \"kubernetes.io/projected/32ebd9b5-c83a-401d-824e-77c47a842836-kube-api-access-fzkgv\") pod \"observability-ui-dashboards-66cbf594b5-ngk5h\" (UID: \"32ebd9b5-c83a-401d-824e-77c47a842836\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.024591 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67666b4d85-nwx4t"] Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.046250 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.052143 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67666b4d85-nwx4t"] Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.162562 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.165313 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.173211 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.174934 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.175059 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.175204 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hmglt" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.175409 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.175532 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.175631 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.179346 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtb5\" (UniqueName: \"kubernetes.io/projected/002174a6-3b57-4eba-985b-9fd7c492b143-kube-api-access-9xtb5\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.179418 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-trusted-ca-bundle\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.179532 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-service-ca\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.179630 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/002174a6-3b57-4eba-985b-9fd7c492b143-console-serving-cert\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.179709 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-console-config\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.180000 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-oauth-serving-cert\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.180247 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/002174a6-3b57-4eba-985b-9fd7c492b143-console-oauth-config\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.181710 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.195143 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282175 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtb5\" (UniqueName: \"kubernetes.io/projected/002174a6-3b57-4eba-985b-9fd7c492b143-kube-api-access-9xtb5\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282228 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282264 4854 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282286 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-trusted-ca-bundle\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282305 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282320 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282337 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282393 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282424 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-service-ca\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282444 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282478 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/002174a6-3b57-4eba-985b-9fd7c492b143-console-serving-cert\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " 
pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282502 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282521 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-console-config\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282591 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-oauth-serving-cert\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282616 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282646 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/002174a6-3b57-4eba-985b-9fd7c492b143-console-oauth-config\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.282689 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng2qt\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-kube-api-access-ng2qt\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.284024 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-console-config\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.285051 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-oauth-serving-cert\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.286129 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-service-ca\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.286168 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/002174a6-3b57-4eba-985b-9fd7c492b143-trusted-ca-bundle\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.289220 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/002174a6-3b57-4eba-985b-9fd7c492b143-console-oauth-config\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.289706 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/002174a6-3b57-4eba-985b-9fd7c492b143-console-serving-cert\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.307795 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtb5\" (UniqueName: \"kubernetes.io/projected/002174a6-3b57-4eba-985b-9fd7c492b143-kube-api-access-9xtb5\") pod \"console-67666b4d85-nwx4t\" (UID: \"002174a6-3b57-4eba-985b-9fd7c492b143\") " pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.372987 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.386750 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.386810 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.386900 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32ebd9b5-c83a-401d-824e-77c47a842836-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ngk5h\" (UID: \"32ebd9b5-c83a-401d-824e-77c47a842836\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.386966 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.387043 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.387108 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng2qt\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-kube-api-access-ng2qt\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.387145 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.387171 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.387190 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.387207 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.387225 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.388218 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.388623 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.390702 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.391119 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.391322 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.393929 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.394796 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.395201 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.395245 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c042c513fa3aca66ad55ee0b68f2245eaa190a63e0e2078526e0ed40cb362657/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.396045 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32ebd9b5-c83a-401d-824e-77c47a842836-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ngk5h\" (UID: \"32ebd9b5-c83a-401d-824e-77c47a842836\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.399496 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.403412 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng2qt\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-kube-api-access-ng2qt\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.429421 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"prometheus-metric-storage-0\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.499187 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 03 06:01:55 crc kubenswrapper[4854]: I0103 06:01:55.596227 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.011508 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dll2c"] Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.013584 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.015941 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.016267 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-v2kw9" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.018656 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.029841 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mkvp7"] Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.032530 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.041134 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dll2c"] Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.053733 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mkvp7"] Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124340 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-log\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124384 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04465680-9e76-4b04-aa5f-c94218a6bf28-combined-ca-bundle\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124408 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mxl\" (UniqueName: \"kubernetes.io/projected/babc1db7-041b-4116-86ff-b9d0c4349d49-kube-api-access-78mxl\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124436 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04465680-9e76-4b04-aa5f-c94218a6bf28-scripts\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124546 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-log-ovn\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124569 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-etc-ovs\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " 
pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124590 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-run\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124612 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-run-ovn\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124637 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-lib\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124701 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-run\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124748 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/babc1db7-041b-4116-86ff-b9d0c4349d49-scripts\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124775 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwt4b\" (UniqueName: \"kubernetes.io/projected/04465680-9e76-4b04-aa5f-c94218a6bf28-kube-api-access-cwt4b\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.124797 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04465680-9e76-4b04-aa5f-c94218a6bf28-ovn-controller-tls-certs\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.228131 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-log-ovn\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.228302 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-etc-ovs\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: 
I0103 06:01:57.228450 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-run\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.228524 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-run-ovn\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.228631 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-lib\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.229034 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-run\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.229052 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-log-ovn\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.229125 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/babc1db7-041b-4116-86ff-b9d0c4349d49-scripts\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.229239 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwt4b\" (UniqueName: \"kubernetes.io/projected/04465680-9e76-4b04-aa5f-c94218a6bf28-kube-api-access-cwt4b\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.229407 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-etc-ovs\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.229520 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-run-ovn\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.229629 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-run\") pod \"ovn-controller-ovs-mkvp7\" (UID: 
\"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.229436 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04465680-9e76-4b04-aa5f-c94218a6bf28-ovn-controller-tls-certs\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.230435 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-log\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.230510 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04465680-9e76-4b04-aa5f-c94218a6bf28-combined-ca-bundle\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.230557 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78mxl\" (UniqueName: \"kubernetes.io/projected/babc1db7-041b-4116-86ff-b9d0c4349d49-kube-api-access-78mxl\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.230589 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04465680-9e76-4b04-aa5f-c94218a6bf28-scripts\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.230938 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-log\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.230969 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04465680-9e76-4b04-aa5f-c94218a6bf28-var-run\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.233223 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/babc1db7-041b-4116-86ff-b9d0c4349d49-scripts\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.233370 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/babc1db7-041b-4116-86ff-b9d0c4349d49-var-lib\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.259035 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04465680-9e76-4b04-aa5f-c94218a6bf28-combined-ca-bundle\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.261271 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04465680-9e76-4b04-aa5f-c94218a6bf28-scripts\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.266272 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04465680-9e76-4b04-aa5f-c94218a6bf28-ovn-controller-tls-certs\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.385387 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78mxl\" (UniqueName: \"kubernetes.io/projected/babc1db7-041b-4116-86ff-b9d0c4349d49-kube-api-access-78mxl\") pod \"ovn-controller-ovs-mkvp7\" (UID: \"babc1db7-041b-4116-86ff-b9d0c4349d49\") " pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.391830 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.401893 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwt4b\" (UniqueName: \"kubernetes.io/projected/04465680-9e76-4b04-aa5f-c94218a6bf28-kube-api-access-cwt4b\") pod \"ovn-controller-dll2c\" (UID: \"04465680-9e76-4b04-aa5f-c94218a6bf28\") " pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.683035 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dll2c" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.966072 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.971884 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.976570 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.976820 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.977116 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tlrfs" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.977337 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.977631 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 03 06:01:57 crc kubenswrapper[4854]: I0103 06:01:57.981849 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.055262 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.055332 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.055430 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-config\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.055456 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.055517 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.055535 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.055632 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vfrc4\" (UniqueName: \"kubernetes.io/projected/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-kube-api-access-vfrc4\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.055671 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8845be78-e501-4a53-bc96-2ac4ba430960\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8845be78-e501-4a53-bc96-2ac4ba430960\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.158944 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.159274 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.159351 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.159433 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-config\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.159467 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.159524 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.159552 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.159717 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfrc4\" (UniqueName: \"kubernetes.io/projected/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-kube-api-access-vfrc4\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " 
pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.159813 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8845be78-e501-4a53-bc96-2ac4ba430960\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8845be78-e501-4a53-bc96-2ac4ba430960\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.160308 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.161043 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-config\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.164358 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.166795 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.166831 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8845be78-e501-4a53-bc96-2ac4ba430960\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8845be78-e501-4a53-bc96-2ac4ba430960\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aba9facf8a7cbfcfb7681f9ad8d88d86c5add329995242269a015b04db753104/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.175691 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.175707 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.180487 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfrc4\" (UniqueName: \"kubernetes.io/projected/08cbd0f0-dda7-45be-9bad-28f1d1bc108d-kube-api-access-vfrc4\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.202391 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-8845be78-e501-4a53-bc96-2ac4ba430960\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8845be78-e501-4a53-bc96-2ac4ba430960\") pod \"ovsdbserver-nb-0\" (UID: \"08cbd0f0-dda7-45be-9bad-28f1d1bc108d\") " pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:58 crc kubenswrapper[4854]: I0103 06:01:58.361552 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 03 06:01:59 crc kubenswrapper[4854]: I0103 06:01:59.997905 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ba007649-daf8-445b-b2c8-73ce6ec54403","Type":"ContainerStarted","Data":"0a8423abbecac7236d02416ed90148ceb9912dfa5aeecf071f12da7504b96e87"} Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.307298 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.324560 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.326986 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.327287 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.327575 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-88ztp" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.327782 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.369459 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.434687 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.434752 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73c3ad1e-a419-4c11-a31d-81f28866fe2b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.434804 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c3ad1e-a419-4c11-a31d-81f28866fe2b-config\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.434829 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.435046 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xswc\" (UniqueName: \"kubernetes.io/projected/73c3ad1e-a419-4c11-a31d-81f28866fe2b-kube-api-access-9xswc\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.435207 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.435258 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-50a36b6d-f721-47b0-8bea-78b97bc457d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50a36b6d-f721-47b0-8bea-78b97bc457d6\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.435465 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73c3ad1e-a419-4c11-a31d-81f28866fe2b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.537511 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xswc\" (UniqueName: \"kubernetes.io/projected/73c3ad1e-a419-4c11-a31d-81f28866fe2b-kube-api-access-9xswc\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.537615 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.537665 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-50a36b6d-f721-47b0-8bea-78b97bc457d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50a36b6d-f721-47b0-8bea-78b97bc457d6\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.537743 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73c3ad1e-a419-4c11-a31d-81f28866fe2b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.537788 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.537833 4854 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73c3ad1e-a419-4c11-a31d-81f28866fe2b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.537875 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c3ad1e-a419-4c11-a31d-81f28866fe2b-config\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.538600 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73c3ad1e-a419-4c11-a31d-81f28866fe2b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.539839 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73c3ad1e-a419-4c11-a31d-81f28866fe2b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.539236 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.540270 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c3ad1e-a419-4c11-a31d-81f28866fe2b-config\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.542150 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.542186 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-50a36b6d-f721-47b0-8bea-78b97bc457d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50a36b6d-f721-47b0-8bea-78b97bc457d6\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7d1bdd1fb0735c0aed033128cd4ebf6e01a16c058b6928eb7113fb51d4984aa1/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.543471 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.544903 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.546183 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c3ad1e-a419-4c11-a31d-81f28866fe2b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.563612 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xswc\" (UniqueName: \"kubernetes.io/projected/73c3ad1e-a419-4c11-a31d-81f28866fe2b-kube-api-access-9xswc\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: W0103 06:02:01.583422 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5742bd8_396a_4174_a8b7_dd6deec69632.slice/crio-7363264c49196cd57d70879897271a07b44de86859d62c3ec6a6f7523e3e8853 WatchSource:0}: Error finding container 7363264c49196cd57d70879897271a07b44de86859d62c3ec6a6f7523e3e8853: Status 404 returned error can't find the container with id 7363264c49196cd57d70879897271a07b44de86859d62c3ec6a6f7523e3e8853 Jan 03 06:02:01 crc kubenswrapper[4854]: I0103 06:02:01.584667 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-50a36b6d-f721-47b0-8bea-78b97bc457d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50a36b6d-f721-47b0-8bea-78b97bc457d6\") pod \"ovsdbserver-sb-0\" (UID: \"73c3ad1e-a419-4c11-a31d-81f28866fe2b\") " pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:01 crc kubenswrapper[4854]: W0103 06:02:01.628664 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71288814_2f4e_4e92_8064_8f9ef1920212.slice/crio-f5b0295f2709e3a9e8f2abe44fb70699ba1907f3f15e6f0b9cdf8dceffbd0927 WatchSource:0}: Error finding container f5b0295f2709e3a9e8f2abe44fb70699ba1907f3f15e6f0b9cdf8dceffbd0927: Status 404 returned error can't find the container with id f5b0295f2709e3a9e8f2abe44fb70699ba1907f3f15e6f0b9cdf8dceffbd0927 Jan 03 06:02:01 crc 
kubenswrapper[4854]: I0103 06:02:01.655164 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 03 06:02:02 crc kubenswrapper[4854]: I0103 06:02:02.020492 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71288814-2f4e-4e92-8064-8f9ef1920212","Type":"ContainerStarted","Data":"f5b0295f2709e3a9e8f2abe44fb70699ba1907f3f15e6f0b9cdf8dceffbd0927"} Jan 03 06:02:02 crc kubenswrapper[4854]: I0103 06:02:02.022077 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5742bd8-396a-4174-a8b7-dd6deec69632","Type":"ContainerStarted","Data":"7363264c49196cd57d70879897271a07b44de86859d62c3ec6a6f7523e3e8853"} Jan 03 06:02:08 crc kubenswrapper[4854]: I0103 06:02:08.094041 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 03 06:02:08 crc kubenswrapper[4854]: I0103 06:02:08.549579 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dll2c"] Jan 03 06:02:08 crc kubenswrapper[4854]: I0103 06:02:08.567373 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67666b4d85-nwx4t"] Jan 03 06:02:08 crc kubenswrapper[4854]: I0103 06:02:08.646368 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 03 06:02:14 crc kubenswrapper[4854]: E0103 06:02:14.688379 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 03 06:02:14 crc kubenswrapper[4854]: E0103 06:02:14.691766 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdxk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(11d4187f-5938-4054-9eec-4d84f843bd73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:02:14 crc kubenswrapper[4854]: E0103 06:02:14.693169 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" Jan 03 06:02:14 crc kubenswrapper[4854]: E0103 06:02:14.765651 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 03 06:02:14 crc kubenswrapper[4854]: E0103 06:02:14.765882 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2mg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(ba007649-daf8-445b-b2c8-73ce6ec54403): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:02:14 crc kubenswrapper[4854]: E0103 06:02:14.767217 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.154347 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.154720 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" Jan 03 06:02:15 crc kubenswrapper[4854]: W0103 06:02:15.586879 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a38e3c_dd5a_447b_b580_ed7bd5f16fde.slice/crio-961e13b21b5d51e645d050cb57f92e4a88a73fcf36afafffd6547add501ccefc WatchSource:0}: Error finding container 961e13b21b5d51e645d050cb57f92e4a88a73fcf36afafffd6547add501ccefc: Status 404 returned error can't find the container 
with id 961e13b21b5d51e645d050cb57f92e4a88a73fcf36afafffd6547add501ccefc Jan 03 06:02:15 crc kubenswrapper[4854]: W0103 06:02:15.595145 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod002174a6_3b57_4eba_985b_9fd7c492b143.slice/crio-6ffa06e06f7ffeb6c54eec5c679590fb0e1466548cc8099ad86a912fbb63e3b6 WatchSource:0}: Error finding container 6ffa06e06f7ffeb6c54eec5c679590fb0e1466548cc8099ad86a912fbb63e3b6: Status 404 returned error can't find the container with id 6ffa06e06f7ffeb6c54eec5c679590fb0e1466548cc8099ad86a912fbb63e3b6 Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.614213 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.614392 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4d2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-6mlr2_openstack(27abeea6-c224-4248-99aa-cfc50d1e911b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.614688 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.614901 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmzw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-2kbfm_openstack(19db79b0-2939-4a6b-bf9a-9f35b6f63acd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.615766 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" podUID="27abeea6-c224-4248-99aa-cfc50d1e911b" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.617474 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" podUID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.627633 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.628307 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snn75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-xwdk9_openstack(581c10f5-d4c5-41af-9930-dc5324b5b48a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:02:15 crc kubenswrapper[4854]: E0103 06:02:15.629821 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" podUID="581c10f5-d4c5-41af-9930-dc5324b5b48a" Jan 03 06:02:16 crc kubenswrapper[4854]: I0103 06:02:16.252146 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerStarted","Data":"961e13b21b5d51e645d050cb57f92e4a88a73fcf36afafffd6547add501ccefc"} Jan 03 06:02:16 crc kubenswrapper[4854]: I0103 06:02:16.252578 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 03 06:02:16 crc kubenswrapper[4854]: I0103 06:02:16.270461 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"08cbd0f0-dda7-45be-9bad-28f1d1bc108d","Type":"ContainerStarted","Data":"be68847edb70c01c4006f76a85dcc7d6306b09223bf62268d2268b050c78f2f4"} Jan 03 06:02:16 crc kubenswrapper[4854]: I0103 06:02:16.272181 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dll2c" event={"ID":"04465680-9e76-4b04-aa5f-c94218a6bf28","Type":"ContainerStarted","Data":"35532a58f919ae527521dcc5213c17c918f73b8a3186aba5a1261afe9d18e86f"} Jan 03 
06:02:16 crc kubenswrapper[4854]: I0103 06:02:16.275193 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67666b4d85-nwx4t" event={"ID":"002174a6-3b57-4eba-985b-9fd7c492b143","Type":"ContainerStarted","Data":"6ffa06e06f7ffeb6c54eec5c679590fb0e1466548cc8099ad86a912fbb63e3b6"} Jan 03 06:02:16 crc kubenswrapper[4854]: I0103 06:02:16.387111 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:02:16 crc kubenswrapper[4854]: I0103 06:02:16.406170 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67666b4d85-nwx4t" podStartSLOduration=22.406151315 podStartE2EDuration="22.406151315s" podCreationTimestamp="2026-01-03 06:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:16.371353857 +0000 UTC m=+1314.697930429" watchObservedRunningTime="2026-01-03 06:02:16.406151315 +0000 UTC m=+1314.732727897" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.088623 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.111574 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.336048 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10578fce-2c06-4977-9cb2-51b8593f9fed","Type":"ContainerStarted","Data":"ab6729129cb14f8272e9f97614599088b513fb6219941a680a937f6d5617ae59"} Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.374274 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h"] Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.402416 4854 generic.go:334] "Generic (PLEG): container finished" podID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" containerID="eb1c1ac7cb3199760c4745b40d40c4521c1ba8316ddd5c2440ae23e398cda87c" exitCode=0 Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.402537 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" event={"ID":"19db79b0-2939-4a6b-bf9a-9f35b6f63acd","Type":"ContainerDied","Data":"eb1c1ac7cb3199760c4745b40d40c4521c1ba8316ddd5c2440ae23e398cda87c"} Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.466025 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67666b4d85-nwx4t" event={"ID":"002174a6-3b57-4eba-985b-9fd7c492b143","Type":"ContainerStarted","Data":"880bc6dc8873f0bbc31cde5de1f7081f573da192ec8aefac577a46a08ed98ee5"} Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.495372 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.523161 4854 generic.go:334] "Generic (PLEG): container finished" podID="11e65dd3-c929-4a34-aa76-94ad1d7464db" containerID="4b8f08606528c73dd507f22ae3ae9bca8bbbafe4a53b066c34f385c2f9d2c78f" exitCode=0 Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.523278 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" event={"ID":"11e65dd3-c929-4a34-aa76-94ad1d7464db","Type":"ContainerDied","Data":"4b8f08606528c73dd507f22ae3ae9bca8bbbafe4a53b066c34f385c2f9d2c78f"} Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.525202 4854 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2518f81-3d3d-47a6-a157-19c2685f07d2","Type":"ContainerStarted","Data":"e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126"} Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.526317 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61632803-5660-4e68-865c-0d231613aec4","Type":"ContainerStarted","Data":"69c7424031ded60dc32a72e9e88c2e713fd7405d1678d6ad98e2f924f155884a"} Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.534256 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"748d9586-5917-42ab-8f1f-3a811b724dae","Type":"ContainerStarted","Data":"6419424c296c38e40a92fda54a255ae79688dfd792cf9a389023e41d945817a1"} Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.544021 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mkvp7"] Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.632036 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.680787 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4d2k\" (UniqueName: \"kubernetes.io/projected/27abeea6-c224-4248-99aa-cfc50d1e911b-kube-api-access-j4d2k\") pod \"27abeea6-c224-4248-99aa-cfc50d1e911b\" (UID: \"27abeea6-c224-4248-99aa-cfc50d1e911b\") " Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.681030 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27abeea6-c224-4248-99aa-cfc50d1e911b-config\") pod \"27abeea6-c224-4248-99aa-cfc50d1e911b\" (UID: \"27abeea6-c224-4248-99aa-cfc50d1e911b\") " Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.682672 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27abeea6-c224-4248-99aa-cfc50d1e911b-config" (OuterVolumeSpecName: "config") pod "27abeea6-c224-4248-99aa-cfc50d1e911b" (UID: "27abeea6-c224-4248-99aa-cfc50d1e911b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.693418 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27abeea6-c224-4248-99aa-cfc50d1e911b-kube-api-access-j4d2k" (OuterVolumeSpecName: "kube-api-access-j4d2k") pod "27abeea6-c224-4248-99aa-cfc50d1e911b" (UID: "27abeea6-c224-4248-99aa-cfc50d1e911b"). InnerVolumeSpecName "kube-api-access-j4d2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:02:17 crc kubenswrapper[4854]: W0103 06:02:17.755753 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbabc1db7_041b_4116_86ff_b9d0c4349d49.slice/crio-c729de4d841ad9a4382f7f70a07a9dadd2c79c703ceaf69438c8f360cb72a02e WatchSource:0}: Error finding container c729de4d841ad9a4382f7f70a07a9dadd2c79c703ceaf69438c8f360cb72a02e: Status 404 returned error can't find the container with id c729de4d841ad9a4382f7f70a07a9dadd2c79c703ceaf69438c8f360cb72a02e Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.784922 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4d2k\" (UniqueName: \"kubernetes.io/projected/27abeea6-c224-4248-99aa-cfc50d1e911b-kube-api-access-j4d2k\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.784965 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27abeea6-c224-4248-99aa-cfc50d1e911b-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.859699 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.886789 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snn75\" (UniqueName: \"kubernetes.io/projected/581c10f5-d4c5-41af-9930-dc5324b5b48a-kube-api-access-snn75\") pod \"581c10f5-d4c5-41af-9930-dc5324b5b48a\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.887232 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-config\") pod \"581c10f5-d4c5-41af-9930-dc5324b5b48a\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.888635 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-dns-svc\") pod \"581c10f5-d4c5-41af-9930-dc5324b5b48a\" (UID: \"581c10f5-d4c5-41af-9930-dc5324b5b48a\") " Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.887861 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-config" (OuterVolumeSpecName: "config") pod "581c10f5-d4c5-41af-9930-dc5324b5b48a" (UID: "581c10f5-d4c5-41af-9930-dc5324b5b48a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.889128 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "581c10f5-d4c5-41af-9930-dc5324b5b48a" (UID: "581c10f5-d4c5-41af-9930-dc5324b5b48a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.890555 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581c10f5-d4c5-41af-9930-dc5324b5b48a-kube-api-access-snn75" (OuterVolumeSpecName: "kube-api-access-snn75") pod "581c10f5-d4c5-41af-9930-dc5324b5b48a" (UID: "581c10f5-d4c5-41af-9930-dc5324b5b48a"). InnerVolumeSpecName "kube-api-access-snn75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.991611 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.991926 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/581c10f5-d4c5-41af-9930-dc5324b5b48a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:17 crc kubenswrapper[4854]: I0103 06:02:17.991937 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snn75\" (UniqueName: \"kubernetes.io/projected/581c10f5-d4c5-41af-9930-dc5324b5b48a-kube-api-access-snn75\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.555580 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" event={"ID":"32ebd9b5-c83a-401d-824e-77c47a842836","Type":"ContainerStarted","Data":"83657471dbaf0712a8f88262e07d222e9cd6263240363e25c9c1fb8990b360b8"} Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.561305 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5742bd8-396a-4174-a8b7-dd6deec69632","Type":"ContainerStarted","Data":"88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b"} Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.566598 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mkvp7" event={"ID":"babc1db7-041b-4116-86ff-b9d0c4349d49","Type":"ContainerStarted","Data":"c729de4d841ad9a4382f7f70a07a9dadd2c79c703ceaf69438c8f360cb72a02e"} Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.568302 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" event={"ID":"27abeea6-c224-4248-99aa-cfc50d1e911b","Type":"ContainerDied","Data":"04987a3aeb4d57bf5d1113abbbb1213f98b6feed629d7eaddfc21af1f915c716"} Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.568362 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6mlr2" Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.570869 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" event={"ID":"581c10f5-d4c5-41af-9930-dc5324b5b48a","Type":"ContainerDied","Data":"d5be23cb32ae5f37ce4368038267cb0bb4a647d040baa3566d1663fb24974683"} Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.570917 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xwdk9" Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.575830 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71288814-2f4e-4e92-8064-8f9ef1920212","Type":"ContainerStarted","Data":"5d753224579da962547240b2ab8650f6accf03025a8696720fb08f50815571ce"} Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.579587 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73c3ad1e-a419-4c11-a31d-81f28866fe2b","Type":"ContainerStarted","Data":"9ff91dc9fa6b36746b6d484678239d8723127426d68808ba171af02a839455b1"} Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.641555 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xwdk9"] Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.641616 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xwdk9"] Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.703107 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6mlr2"] Jan 03 06:02:18 crc kubenswrapper[4854]: I0103 06:02:18.730287 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6mlr2"] Jan 03 06:02:20 crc kubenswrapper[4854]: I0103 06:02:20.131040 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27abeea6-c224-4248-99aa-cfc50d1e911b" path="/var/lib/kubelet/pods/27abeea6-c224-4248-99aa-cfc50d1e911b/volumes" Jan 03 06:02:20 crc kubenswrapper[4854]: I0103 06:02:20.132159 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581c10f5-d4c5-41af-9930-dc5324b5b48a" path="/var/lib/kubelet/pods/581c10f5-d4c5-41af-9930-dc5324b5b48a/volumes" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.299305 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jwxpm"] Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.301140 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.308642 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.356160 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jwxpm"] Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.421868 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bgtf\" (UniqueName: \"kubernetes.io/projected/9863d2ec-0177-4054-b715-08a87aed5eae-kube-api-access-9bgtf\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.421916 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9863d2ec-0177-4054-b715-08a87aed5eae-combined-ca-bundle\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.421951 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9863d2ec-0177-4054-b715-08a87aed5eae-config\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.421976 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9863d2ec-0177-4054-b715-08a87aed5eae-ovs-rundir\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.422036 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9863d2ec-0177-4054-b715-08a87aed5eae-ovn-rundir\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.422213 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9863d2ec-0177-4054-b715-08a87aed5eae-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.491007 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9p2s9"] Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.515627 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-jc5hb"] Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.520930 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.524028 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.524076 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9863d2ec-0177-4054-b715-08a87aed5eae-ovn-rundir\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.524200 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9863d2ec-0177-4054-b715-08a87aed5eae-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.524290 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bgtf\" (UniqueName: \"kubernetes.io/projected/9863d2ec-0177-4054-b715-08a87aed5eae-kube-api-access-9bgtf\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.524320 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9863d2ec-0177-4054-b715-08a87aed5eae-combined-ca-bundle\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.524355 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9863d2ec-0177-4054-b715-08a87aed5eae-config\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.524382 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9863d2ec-0177-4054-b715-08a87aed5eae-ovs-rundir\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.524389 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9863d2ec-0177-4054-b715-08a87aed5eae-ovn-rundir\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.525106 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9863d2ec-0177-4054-b715-08a87aed5eae-config\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.525176 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9863d2ec-0177-4054-b715-08a87aed5eae-ovs-rundir\") pod 
\"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.531972 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9863d2ec-0177-4054-b715-08a87aed5eae-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.541182 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9863d2ec-0177-4054-b715-08a87aed5eae-combined-ca-bundle\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.552181 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-jc5hb"] Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.559038 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bgtf\" (UniqueName: \"kubernetes.io/projected/9863d2ec-0177-4054-b715-08a87aed5eae-kube-api-access-9bgtf\") pod \"ovn-controller-metrics-jwxpm\" (UID: \"9863d2ec-0177-4054-b715-08a87aed5eae\") " pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.628226 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pvj\" (UniqueName: \"kubernetes.io/projected/d35aca72-6071-4bee-b7d6-c6704ec02797-kube-api-access-v2pvj\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.628429 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-config\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.628483 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.628536 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.666534 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jwxpm" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.666807 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2kbfm"] Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.700209 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fbbv8"] Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.701987 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.708440 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.726391 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fbbv8"] Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.732712 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-config\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.732804 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.732844 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.732935 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2pvj\" (UniqueName: \"kubernetes.io/projected/d35aca72-6071-4bee-b7d6-c6704ec02797-kube-api-access-v2pvj\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.733847 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.734465 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-config\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.734842 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.765750 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2pvj\" (UniqueName: \"kubernetes.io/projected/d35aca72-6071-4bee-b7d6-c6704ec02797-kube-api-access-v2pvj\") pod \"dnsmasq-dns-7fd796d7df-jc5hb\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.835297 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-config\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.835379 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgtlz\" (UniqueName: \"kubernetes.io/projected/8299f6ba-92fe-41ee-8a63-184f8a594135-kube-api-access-cgtlz\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.835403 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.835421 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.835467 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.930495 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.937062 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgtlz\" (UniqueName: \"kubernetes.io/projected/8299f6ba-92fe-41ee-8a63-184f8a594135-kube-api-access-cgtlz\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.937149 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.937178 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.937242 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.937392 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-config\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.938255 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.938275 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.938430 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-config\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 06:02:23.938505 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:23 crc kubenswrapper[4854]: I0103 
06:02:23.981845 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgtlz\" (UniqueName: \"kubernetes.io/projected/8299f6ba-92fe-41ee-8a63-184f8a594135-kube-api-access-cgtlz\") pod \"dnsmasq-dns-86db49b7ff-fbbv8\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:24 crc kubenswrapper[4854]: I0103 06:02:24.023478 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:25 crc kubenswrapper[4854]: I0103 06:02:25.373150 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:02:25 crc kubenswrapper[4854]: I0103 06:02:25.373427 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:02:25 crc kubenswrapper[4854]: I0103 06:02:25.380904 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:02:25 crc kubenswrapper[4854]: I0103 06:02:25.696188 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 06:02:25 crc kubenswrapper[4854]: I0103 06:02:25.783186 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64d6659995-xwhf5"] Jan 03 06:02:33 crc kubenswrapper[4854]: E0103 06:02:33.742594 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Jan 03 06:02:33 crc kubenswrapper[4854]: E0103 06:02:33.743490 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt -key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzkgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
observability-ui-dashboards-66cbf594b5-ngk5h_openshift-operators(32ebd9b5-c83a-401d-824e-77c47a842836): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 06:02:33 crc kubenswrapper[4854]: E0103 06:02:33.744749 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" podUID="32ebd9b5-c83a-401d-824e-77c47a842836" Jan 03 06:02:34 crc kubenswrapper[4854]: E0103 06:02:34.623354 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" podUID="32ebd9b5-c83a-401d-824e-77c47a842836" Jan 03 06:02:34 crc kubenswrapper[4854]: E0103 06:02:34.634470 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 03 06:02:34 crc kubenswrapper[4854]: E0103 06:02:34.634704 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtpdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(748d9586-5917-42ab-8f1f-3a811b724dae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:02:34 crc kubenswrapper[4854]: E0103 06:02:34.636262 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" Jan 03 06:02:34 crc kubenswrapper[4854]: E0103 06:02:34.777555 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" Jan 03 06:02:35 crc kubenswrapper[4854]: E0103 06:02:35.889959 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Jan 03 06:02:35 crc kubenswrapper[4854]: E0103 06:02:35.890548 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch6dh548h57bh68fhb6h677h5f6h546h677h59dh57dh689hfdh549h7fh5d9hbbh5d7h5cch67h88h6ch544h64dh5bh644h9bh568hf4h656h6cq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xswc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(73c3ad1e-a419-4c11-a31d-81f28866fe2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:02:37 crc kubenswrapper[4854]: I0103 06:02:37.760621 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fbbv8"] Jan 03 06:02:37 crc kubenswrapper[4854]: I0103 06:02:37.933226 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jwxpm"] Jan 03 06:02:37 crc kubenswrapper[4854]: I0103 06:02:37.945488 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-jc5hb"] Jan 03 06:02:38 crc kubenswrapper[4854]: E0103 06:02:38.321361 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 03 06:02:38 crc kubenswrapper[4854]: E0103 06:02:38.321420 4854 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 03 06:02:38 crc kubenswrapper[4854]: E0103 06:02:38.321569 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbpsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(b2518f81-3d3d-47a6-a157-19c2685f07d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 06:02:38 crc kubenswrapper[4854]: E0103 06:02:38.322799 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="b2518f81-3d3d-47a6-a157-19c2685f07d2" Jan 03 06:02:38 crc kubenswrapper[4854]: W0103 06:02:38.373924 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35aca72_6071_4bee_b7d6_c6704ec02797.slice/crio-73971b448b4bd0cce45efb6fe7a0af9530a3a95e4af6968529002291d6f55611 WatchSource:0}: Error finding container 73971b448b4bd0cce45efb6fe7a0af9530a3a95e4af6968529002291d6f55611: Status 404 returned error can't find the container with id 73971b448b4bd0cce45efb6fe7a0af9530a3a95e4af6968529002291d6f55611 Jan 03 06:02:38 crc kubenswrapper[4854]: I0103 06:02:38.841012 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" event={"ID":"8299f6ba-92fe-41ee-8a63-184f8a594135","Type":"ContainerStarted","Data":"4df9572494682062d0afd7715f1fb65c7de8ab388be07a0e2ac86182f54751b7"} Jan 03 06:02:38 crc kubenswrapper[4854]: I0103 06:02:38.843740 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jwxpm" 
event={"ID":"9863d2ec-0177-4054-b715-08a87aed5eae","Type":"ContainerStarted","Data":"48b008a9d9f7400a993eabaa8d941760bf6ed1d29eec3eb75cbc674e2c05e612"} Jan 03 06:02:38 crc kubenswrapper[4854]: I0103 06:02:38.846986 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" event={"ID":"d35aca72-6071-4bee-b7d6-c6704ec02797","Type":"ContainerStarted","Data":"73971b448b4bd0cce45efb6fe7a0af9530a3a95e4af6968529002291d6f55611"} Jan 03 06:02:38 crc kubenswrapper[4854]: E0103 06:02:38.847761 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="b2518f81-3d3d-47a6-a157-19c2685f07d2" Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.877460 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" event={"ID":"19db79b0-2939-4a6b-bf9a-9f35b6f63acd","Type":"ContainerStarted","Data":"b860c1238393526732cdb6b943711bfb2efdd77463816c7accde43211155cbcb"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.878074 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.877566 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" podUID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" containerName="dnsmasq-dns" containerID="cri-o://b860c1238393526732cdb6b943711bfb2efdd77463816c7accde43211155cbcb" gracePeriod=10 Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.883874 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"08cbd0f0-dda7-45be-9bad-28f1d1bc108d","Type":"ContainerStarted","Data":"e910ea20b35c610f3dab6fa4729c859d0538d6c766dcdd16d58ad1734b122883"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.899134 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" podStartSLOduration=-9223371983.955666 podStartE2EDuration="52.899109612s" podCreationTimestamp="2026-01-03 06:01:47 +0000 UTC" firstStartedPulling="2026-01-03 06:01:48.487954357 +0000 UTC m=+1286.814530929" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:39.891610997 +0000 UTC m=+1338.218187569" watchObservedRunningTime="2026-01-03 06:02:39.899109612 +0000 UTC m=+1338.225686184" Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.900413 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dll2c" event={"ID":"04465680-9e76-4b04-aa5f-c94218a6bf28","Type":"ContainerStarted","Data":"2a3024df82dd0a64d9de1079241e222dc01ce7a03aec44229b8d105526253882"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.900493 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dll2c" Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.905814 4854 generic.go:334] "Generic (PLEG): container finished" podID="d35aca72-6071-4bee-b7d6-c6704ec02797" containerID="108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d" exitCode=0 Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.905886 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" 
event={"ID":"d35aca72-6071-4bee-b7d6-c6704ec02797","Type":"ContainerDied","Data":"108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.909043 4854 generic.go:334] "Generic (PLEG): container finished" podID="8299f6ba-92fe-41ee-8a63-184f8a594135" containerID="a5a410cda3fc8bc9f631f873d36ad8a2ab52bf33eb272f5d2703dd56daa38842" exitCode=0 Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.909129 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" event={"ID":"8299f6ba-92fe-41ee-8a63-184f8a594135","Type":"ContainerDied","Data":"a5a410cda3fc8bc9f631f873d36ad8a2ab52bf33eb272f5d2703dd56daa38842"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.913099 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mkvp7" event={"ID":"babc1db7-041b-4116-86ff-b9d0c4349d49","Type":"ContainerStarted","Data":"ddf947bbc5766a0d79e53e190daabe8dfca15840ff31f3a6eb057aab00e1ddbe"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.915685 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" event={"ID":"11e65dd3-c929-4a34-aa76-94ad1d7464db","Type":"ContainerStarted","Data":"966be0870b3703258629a86592d60328dd3141e73e827e626ef8c5cda9a46c3a"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.915772 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.915763 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" podUID="11e65dd3-c929-4a34-aa76-94ad1d7464db" containerName="dnsmasq-dns" containerID="cri-o://966be0870b3703258629a86592d60328dd3141e73e827e626ef8c5cda9a46c3a" gracePeriod=10 Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.924561 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61632803-5660-4e68-865c-0d231613aec4","Type":"ContainerStarted","Data":"8e19a2caf2f73bd0c918eec5605b17d505e2e8df5f037db29d0627875567235e"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.924851 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dll2c" podStartSLOduration=23.382365063 podStartE2EDuration="43.924830456s" podCreationTimestamp="2026-01-03 06:01:56 +0000 UTC" firstStartedPulling="2026-01-03 06:02:15.618080082 +0000 UTC m=+1313.944656654" lastFinishedPulling="2026-01-03 06:02:36.160545475 +0000 UTC m=+1334.487122047" observedRunningTime="2026-01-03 06:02:39.918413968 +0000 UTC m=+1338.244990540" watchObservedRunningTime="2026-01-03 06:02:39.924830456 +0000 UTC m=+1338.251407048" Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.924910 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.928277 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10578fce-2c06-4977-9cb2-51b8593f9fed","Type":"ContainerStarted","Data":"2516ced73f5e638936aaf2680762c1b3e3018cc4dc29b07b6f9ad45dcf11cdc7"} Jan 03 06:02:39 crc kubenswrapper[4854]: I0103 06:02:39.978381 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" podStartSLOduration=26.033655004 podStartE2EDuration="52.978362547s" podCreationTimestamp="2026-01-03 06:01:47 
+0000 UTC" firstStartedPulling="2026-01-03 06:01:48.859728577 +0000 UTC m=+1287.186305149" lastFinishedPulling="2026-01-03 06:02:15.80443613 +0000 UTC m=+1314.131012692" observedRunningTime="2026-01-03 06:02:39.969996721 +0000 UTC m=+1338.296573303" watchObservedRunningTime="2026-01-03 06:02:39.978362547 +0000 UTC m=+1338.304939119" Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.060106 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=29.416450649 podStartE2EDuration="48.060069393s" podCreationTimestamp="2026-01-03 06:01:52 +0000 UTC" firstStartedPulling="2026-01-03 06:02:16.270025617 +0000 UTC m=+1314.596602189" lastFinishedPulling="2026-01-03 06:02:34.913644361 +0000 UTC m=+1333.240220933" observedRunningTime="2026-01-03 06:02:40.04294594 +0000 UTC m=+1338.369522532" watchObservedRunningTime="2026-01-03 06:02:40.060069393 +0000 UTC m=+1338.386645975" Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.942844 4854 generic.go:334] "Generic (PLEG): container finished" podID="11e65dd3-c929-4a34-aa76-94ad1d7464db" containerID="966be0870b3703258629a86592d60328dd3141e73e827e626ef8c5cda9a46c3a" exitCode=0 Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.942927 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" event={"ID":"11e65dd3-c929-4a34-aa76-94ad1d7464db","Type":"ContainerDied","Data":"966be0870b3703258629a86592d60328dd3141e73e827e626ef8c5cda9a46c3a"} Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.945475 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ba007649-daf8-445b-b2c8-73ce6ec54403","Type":"ContainerStarted","Data":"3da223885f26a8fbf9908728d64ed6e77133ba67ab412d2929573a54cadd668b"} Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.949034 4854 generic.go:334] "Generic (PLEG): container finished" podID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" containerID="b860c1238393526732cdb6b943711bfb2efdd77463816c7accde43211155cbcb" exitCode=0 Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.949131 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" event={"ID":"19db79b0-2939-4a6b-bf9a-9f35b6f63acd","Type":"ContainerDied","Data":"b860c1238393526732cdb6b943711bfb2efdd77463816c7accde43211155cbcb"} Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.951762 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"11d4187f-5938-4054-9eec-4d84f843bd73","Type":"ContainerStarted","Data":"8b3905b680a06f0704057089d6a59eff8699f94da007aad0b4bf2bf6b922a256"} Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.953839 4854 generic.go:334] "Generic (PLEG): container finished" podID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerID="ddf947bbc5766a0d79e53e190daabe8dfca15840ff31f3a6eb057aab00e1ddbe" exitCode=0 Jan 03 06:02:40 crc kubenswrapper[4854]: I0103 06:02:40.954247 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mkvp7" event={"ID":"babc1db7-041b-4116-86ff-b9d0c4349d49","Type":"ContainerDied","Data":"ddf947bbc5766a0d79e53e190daabe8dfca15840ff31f3a6eb057aab00e1ddbe"} Jan 03 06:02:41 crc kubenswrapper[4854]: I0103 06:02:41.976253 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerStarted","Data":"d95b4d4b362297f6c2586c50864692d21d1c811876211a128deb95be83ec1ff2"} Jan 03 06:02:41 crc kubenswrapper[4854]: I0103 06:02:41.981878 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" event={"ID":"19db79b0-2939-4a6b-bf9a-9f35b6f63acd","Type":"ContainerDied","Data":"ee9a19919b6476108f1acb3bf12070dfe225ad886a5971c3ee790fe0b4942751"} Jan 03 06:02:41 crc kubenswrapper[4854]: I0103 06:02:41.981909 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee9a19919b6476108f1acb3bf12070dfe225ad886a5971c3ee790fe0b4942751" Jan 03 06:02:41 crc kubenswrapper[4854]: I0103 06:02:41.984669 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" event={"ID":"11e65dd3-c929-4a34-aa76-94ad1d7464db","Type":"ContainerDied","Data":"f483b082d7e8f5b1b831f169f578e1398fef77ea1046aa46655c6fb10238f6dc"} Jan 03 06:02:41 crc kubenswrapper[4854]: I0103 06:02:41.984750 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f483b082d7e8f5b1b831f169f578e1398fef77ea1046aa46655c6fb10238f6dc" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.078807 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.136696 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.286206 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmzw2\" (UniqueName: \"kubernetes.io/projected/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-kube-api-access-dmzw2\") pod \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.286316 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-config\") pod \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.286402 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-config\") pod \"11e65dd3-c929-4a34-aa76-94ad1d7464db\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.286442 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg85k\" (UniqueName: \"kubernetes.io/projected/11e65dd3-c929-4a34-aa76-94ad1d7464db-kube-api-access-wg85k\") pod \"11e65dd3-c929-4a34-aa76-94ad1d7464db\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.286475 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-dns-svc\") pod \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\" (UID: \"19db79b0-2939-4a6b-bf9a-9f35b6f63acd\") " Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.286572 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-dns-svc\") pod \"11e65dd3-c929-4a34-aa76-94ad1d7464db\" (UID: \"11e65dd3-c929-4a34-aa76-94ad1d7464db\") " Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.334001 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-kube-api-access-dmzw2" (OuterVolumeSpecName: "kube-api-access-dmzw2") pod "19db79b0-2939-4a6b-bf9a-9f35b6f63acd" (UID: "19db79b0-2939-4a6b-bf9a-9f35b6f63acd"). InnerVolumeSpecName "kube-api-access-dmzw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.334507 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e65dd3-c929-4a34-aa76-94ad1d7464db-kube-api-access-wg85k" (OuterVolumeSpecName: "kube-api-access-wg85k") pod "11e65dd3-c929-4a34-aa76-94ad1d7464db" (UID: "11e65dd3-c929-4a34-aa76-94ad1d7464db"). InnerVolumeSpecName "kube-api-access-wg85k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.390057 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmzw2\" (UniqueName: \"kubernetes.io/projected/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-kube-api-access-dmzw2\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.390355 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg85k\" (UniqueName: \"kubernetes.io/projected/11e65dd3-c929-4a34-aa76-94ad1d7464db-kube-api-access-wg85k\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:42 crc kubenswrapper[4854]: E0103 06:02:42.469518 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="73c3ad1e-a419-4c11-a31d-81f28866fe2b" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.530734 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "19db79b0-2939-4a6b-bf9a-9f35b6f63acd" (UID: "19db79b0-2939-4a6b-bf9a-9f35b6f63acd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.536585 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11e65dd3-c929-4a34-aa76-94ad1d7464db" (UID: "11e65dd3-c929-4a34-aa76-94ad1d7464db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.555818 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-config" (OuterVolumeSpecName: "config") pod "11e65dd3-c929-4a34-aa76-94ad1d7464db" (UID: "11e65dd3-c929-4a34-aa76-94ad1d7464db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.558786 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-config" (OuterVolumeSpecName: "config") pod "19db79b0-2939-4a6b-bf9a-9f35b6f63acd" (UID: "19db79b0-2939-4a6b-bf9a-9f35b6f63acd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.593203 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.593238 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.593246 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19db79b0-2939-4a6b-bf9a-9f35b6f63acd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.593254 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e65dd3-c929-4a34-aa76-94ad1d7464db-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.996192 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jwxpm" event={"ID":"9863d2ec-0177-4054-b715-08a87aed5eae","Type":"ContainerStarted","Data":"415691996b2fe93764a889d00be63d93ac71a89620ac44bd9e8e61934d71ffe3"} Jan 03 06:02:42 crc kubenswrapper[4854]: I0103 06:02:42.998957 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73c3ad1e-a419-4c11-a31d-81f28866fe2b","Type":"ContainerStarted","Data":"257ce9696958ed276d70bed4407077c2d686488e50de684d8f484fd25186d90b"} Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.001680 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" event={"ID":"d35aca72-6071-4bee-b7d6-c6704ec02797","Type":"ContainerStarted","Data":"62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c"} Jan 03 06:02:43 crc kubenswrapper[4854]: E0103 06:02:43.001718 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="73c3ad1e-a419-4c11-a31d-81f28866fe2b" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.001877 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.003945 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" event={"ID":"8299f6ba-92fe-41ee-8a63-184f8a594135","Type":"ContainerStarted","Data":"7c14405b7572123f74d20adf3796a74bdd3aa749405989ca480262c94aa5cf71"} Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.004407 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.008421 4854 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"08cbd0f0-dda7-45be-9bad-28f1d1bc108d","Type":"ContainerStarted","Data":"c36110dc2667d5c9b4662dfafea0d9e6222d80da5dbebc9edb9ffb7ec865427e"} Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.010963 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9p2s9" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.015301 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mkvp7" event={"ID":"babc1db7-041b-4116-86ff-b9d0c4349d49","Type":"ContainerStarted","Data":"2f82bdaaff14fa001fd22c704f8f539885cc64a1f1d7d13dfee2b4e9f1126ecc"} Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.015378 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mkvp7" event={"ID":"babc1db7-041b-4116-86ff-b9d0c4349d49","Type":"ContainerStarted","Data":"a71321678b4ed0f9501fde9d07000bfbb189e39a4be789736da84c5d154c3a8d"} Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.015467 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2kbfm" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.023958 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jwxpm" podStartSLOduration=16.467430572 podStartE2EDuration="20.023930478s" podCreationTimestamp="2026-01-03 06:02:23 +0000 UTC" firstStartedPulling="2026-01-03 06:02:38.386882793 +0000 UTC m=+1336.713459365" lastFinishedPulling="2026-01-03 06:02:41.943382699 +0000 UTC m=+1340.269959271" observedRunningTime="2026-01-03 06:02:43.016800962 +0000 UTC m=+1341.343377544" watchObservedRunningTime="2026-01-03 06:02:43.023930478 +0000 UTC m=+1341.350507070" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.151556 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.829167062 podStartE2EDuration="47.151537576s" podCreationTimestamp="2026-01-03 06:01:56 +0000 UTC" firstStartedPulling="2026-01-03 06:02:15.618976334 +0000 UTC m=+1313.945552906" lastFinishedPulling="2026-01-03 06:02:41.941346848 +0000 UTC m=+1340.267923420" observedRunningTime="2026-01-03 06:02:43.112273948 +0000 UTC m=+1341.438850560" watchObservedRunningTime="2026-01-03 06:02:43.151537576 +0000 UTC m=+1341.478114148" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.151985 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" podStartSLOduration=20.151979377 podStartE2EDuration="20.151979377s" podCreationTimestamp="2026-01-03 06:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:43.142652967 +0000 UTC m=+1341.469229549" watchObservedRunningTime="2026-01-03 06:02:43.151979377 +0000 UTC m=+1341.478555949" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.189683 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" podStartSLOduration=20.189654427 podStartE2EDuration="20.189654427s" podCreationTimestamp="2026-01-03 06:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:43.173923149 +0000 UTC m=+1341.500499731" 
watchObservedRunningTime="2026-01-03 06:02:43.189654427 +0000 UTC m=+1341.516230999" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.202891 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9p2s9"] Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.220673 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9p2s9"] Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.221900 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mkvp7" podStartSLOduration=28.387093712 podStartE2EDuration="47.221879712s" podCreationTimestamp="2026-01-03 06:01:56 +0000 UTC" firstStartedPulling="2026-01-03 06:02:17.760118711 +0000 UTC m=+1316.086695283" lastFinishedPulling="2026-01-03 06:02:36.594904711 +0000 UTC m=+1334.921481283" observedRunningTime="2026-01-03 06:02:43.208345478 +0000 UTC m=+1341.534922060" watchObservedRunningTime="2026-01-03 06:02:43.221879712 +0000 UTC m=+1341.548456294" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.235705 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2kbfm"] Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.242624 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2kbfm"] Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.364956 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.365168 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 03 06:02:43 crc kubenswrapper[4854]: I0103 06:02:43.433144 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 03 06:02:44 crc kubenswrapper[4854]: I0103 06:02:44.021459 4854 generic.go:334] "Generic (PLEG): container finished" podID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerID="2516ced73f5e638936aaf2680762c1b3e3018cc4dc29b07b6f9ad45dcf11cdc7" exitCode=0 Jan 03 06:02:44 crc kubenswrapper[4854]: I0103 06:02:44.021522 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10578fce-2c06-4977-9cb2-51b8593f9fed","Type":"ContainerDied","Data":"2516ced73f5e638936aaf2680762c1b3e3018cc4dc29b07b6f9ad45dcf11cdc7"} Jan 03 06:02:44 crc kubenswrapper[4854]: I0103 06:02:44.023385 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:02:44 crc kubenswrapper[4854]: I0103 06:02:44.023445 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mkvp7" Jan 03 06:02:44 crc kubenswrapper[4854]: E0103 06:02:44.026628 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="73c3ad1e-a419-4c11-a31d-81f28866fe2b" Jan 03 06:02:44 crc kubenswrapper[4854]: I0103 06:02:44.084187 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 03 06:02:44 crc kubenswrapper[4854]: I0103 06:02:44.143881 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e65dd3-c929-4a34-aa76-94ad1d7464db" 
path="/var/lib/kubelet/pods/11e65dd3-c929-4a34-aa76-94ad1d7464db/volumes" Jan 03 06:02:44 crc kubenswrapper[4854]: I0103 06:02:44.153509 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" path="/var/lib/kubelet/pods/19db79b0-2939-4a6b-bf9a-9f35b6f63acd/volumes" Jan 03 06:02:45 crc kubenswrapper[4854]: I0103 06:02:45.035354 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10578fce-2c06-4977-9cb2-51b8593f9fed","Type":"ContainerStarted","Data":"85534b86966512a3a6777d75350ffebd09d510b71ed3bd20f577bb5b92d31a74"} Jan 03 06:02:45 crc kubenswrapper[4854]: I0103 06:02:45.064949 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=36.707516309 podStartE2EDuration="56.064925523s" podCreationTimestamp="2026-01-03 06:01:49 +0000 UTC" firstStartedPulling="2026-01-03 06:02:17.230959965 +0000 UTC m=+1315.557536537" lastFinishedPulling="2026-01-03 06:02:36.588369159 +0000 UTC m=+1334.914945751" observedRunningTime="2026-01-03 06:02:45.056153036 +0000 UTC m=+1343.382729618" watchObservedRunningTime="2026-01-03 06:02:45.064925523 +0000 UTC m=+1343.391502115" Jan 03 06:02:47 crc kubenswrapper[4854]: I0103 06:02:47.488606 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 03 06:02:48 crc kubenswrapper[4854]: I0103 06:02:48.070027 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"748d9586-5917-42ab-8f1f-3a811b724dae","Type":"ContainerStarted","Data":"2190ae54060737fb03fc6ba82f135aa31d85f4332fe7ba582c8b42da8c0416cb"} Jan 03 06:02:48 crc kubenswrapper[4854]: E0103 06:02:48.344580 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a38e3c_dd5a_447b_b580_ed7bd5f16fde.slice/crio-d95b4d4b362297f6c2586c50864692d21d1c811876211a128deb95be83ec1ff2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a38e3c_dd5a_447b_b580_ed7bd5f16fde.slice/crio-conmon-d95b4d4b362297f6c2586c50864692d21d1c811876211a128deb95be83ec1ff2.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:02:48 crc kubenswrapper[4854]: E0103 06:02:48.344665 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a38e3c_dd5a_447b_b580_ed7bd5f16fde.slice/crio-d95b4d4b362297f6c2586c50864692d21d1c811876211a128deb95be83ec1ff2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a38e3c_dd5a_447b_b580_ed7bd5f16fde.slice/crio-conmon-d95b4d4b362297f6c2586c50864692d21d1c811876211a128deb95be83ec1ff2.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:02:48 crc kubenswrapper[4854]: I0103 06:02:48.932303 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.027311 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.082599 4854 generic.go:334] "Generic (PLEG): container finished" 
podID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerID="d95b4d4b362297f6c2586c50864692d21d1c811876211a128deb95be83ec1ff2" exitCode=0 Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.082649 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerDied","Data":"d95b4d4b362297f6c2586c50864692d21d1c811876211a128deb95be83ec1ff2"} Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.095252 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-jc5hb"] Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.095467 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" podUID="d35aca72-6071-4bee-b7d6-c6704ec02797" containerName="dnsmasq-dns" containerID="cri-o://62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c" gracePeriod=10 Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.628633 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.808049 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-dns-svc\") pod \"d35aca72-6071-4bee-b7d6-c6704ec02797\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.808286 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-config\") pod \"d35aca72-6071-4bee-b7d6-c6704ec02797\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.808331 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-ovsdbserver-nb\") pod \"d35aca72-6071-4bee-b7d6-c6704ec02797\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.808544 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2pvj\" (UniqueName: \"kubernetes.io/projected/d35aca72-6071-4bee-b7d6-c6704ec02797-kube-api-access-v2pvj\") pod \"d35aca72-6071-4bee-b7d6-c6704ec02797\" (UID: \"d35aca72-6071-4bee-b7d6-c6704ec02797\") " Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.816561 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35aca72-6071-4bee-b7d6-c6704ec02797-kube-api-access-v2pvj" (OuterVolumeSpecName: "kube-api-access-v2pvj") pod "d35aca72-6071-4bee-b7d6-c6704ec02797" (UID: "d35aca72-6071-4bee-b7d6-c6704ec02797"). InnerVolumeSpecName "kube-api-access-v2pvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.858270 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d35aca72-6071-4bee-b7d6-c6704ec02797" (UID: "d35aca72-6071-4bee-b7d6-c6704ec02797"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.887479 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d35aca72-6071-4bee-b7d6-c6704ec02797" (UID: "d35aca72-6071-4bee-b7d6-c6704ec02797"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.901613 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-config" (OuterVolumeSpecName: "config") pod "d35aca72-6071-4bee-b7d6-c6704ec02797" (UID: "d35aca72-6071-4bee-b7d6-c6704ec02797"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.912066 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.912115 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.912126 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d35aca72-6071-4bee-b7d6-c6704ec02797-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:49 crc kubenswrapper[4854]: I0103 06:02:49.912136 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2pvj\" (UniqueName: \"kubernetes.io/projected/d35aca72-6071-4bee-b7d6-c6704ec02797-kube-api-access-v2pvj\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.094341 4854 generic.go:334] "Generic (PLEG): container finished" podID="d35aca72-6071-4bee-b7d6-c6704ec02797" containerID="62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c" exitCode=0 Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.094408 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.094455 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" event={"ID":"d35aca72-6071-4bee-b7d6-c6704ec02797","Type":"ContainerDied","Data":"62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c"} Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.094492 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-jc5hb" event={"ID":"d35aca72-6071-4bee-b7d6-c6704ec02797","Type":"ContainerDied","Data":"73971b448b4bd0cce45efb6fe7a0af9530a3a95e4af6968529002291d6f55611"} Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.094516 4854 scope.go:117] "RemoveContainer" containerID="62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.097372 4854 generic.go:334] "Generic (PLEG): container finished" podID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerID="88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b" exitCode=0 Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.097455 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5742bd8-396a-4174-a8b7-dd6deec69632","Type":"ContainerDied","Data":"88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b"} Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.100218 4854 generic.go:334] "Generic (PLEG): container finished" podID="71288814-2f4e-4e92-8064-8f9ef1920212" containerID="5d753224579da962547240b2ab8650f6accf03025a8696720fb08f50815571ce" exitCode=0 Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.100277 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71288814-2f4e-4e92-8064-8f9ef1920212","Type":"ContainerDied","Data":"5d753224579da962547240b2ab8650f6accf03025a8696720fb08f50815571ce"} Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.130862 4854 scope.go:117] "RemoveContainer" containerID="108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.185000 4854 scope.go:117] "RemoveContainer" containerID="62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c" Jan 03 06:02:50 crc kubenswrapper[4854]: E0103 06:02:50.185491 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c\": container with ID starting with 62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c not found: ID does not exist" containerID="62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.185534 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c"} err="failed to get container status \"62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c\": rpc error: code = NotFound desc = could not find container \"62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c\": container with ID starting with 62b10d7e0d65adf71252681a3f644c67d6d37efe89cf2fdab9134cf0245e7d4c not found: ID does not exist" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.185561 4854 scope.go:117] "RemoveContainer" 
containerID="108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d" Jan 03 06:02:50 crc kubenswrapper[4854]: E0103 06:02:50.186019 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d\": container with ID starting with 108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d not found: ID does not exist" containerID="108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.186065 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d"} err="failed to get container status \"108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d\": rpc error: code = NotFound desc = could not find container \"108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d\": container with ID starting with 108e59799bc3d3de0b63b8b145bad6340e7f031734ce0ef56048224eaadab11d not found: ID does not exist" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.192556 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-jc5hb"] Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.204864 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-jc5hb"] Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.839292 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-64d6659995-xwhf5" podUID="1ca99325-405c-467a-a9e0-53c5e4fb96e4" containerName="console" containerID="cri-o://5513109ef6942d9686904d3f18a4a8b92ad267ac54c67de568e73bfe7ff3688e" gracePeriod=15 Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.851019 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.851344 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 03 06:02:50 crc kubenswrapper[4854]: I0103 06:02:50.950826 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.114852 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71288814-2f4e-4e92-8064-8f9ef1920212","Type":"ContainerStarted","Data":"2c57a63b557f809daa470fa1e3f261e47b9ca7c22a62d29b168b4282d62dc1e2"} Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.115099 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.126985 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64d6659995-xwhf5_1ca99325-405c-467a-a9e0-53c5e4fb96e4/console/0.log" Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.127038 4854 generic.go:334] "Generic (PLEG): container finished" podID="1ca99325-405c-467a-a9e0-53c5e4fb96e4" containerID="5513109ef6942d9686904d3f18a4a8b92ad267ac54c67de568e73bfe7ff3688e" exitCode=2 Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.127143 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d6659995-xwhf5" 
event={"ID":"1ca99325-405c-467a-a9e0-53c5e4fb96e4","Type":"ContainerDied","Data":"5513109ef6942d9686904d3f18a4a8b92ad267ac54c67de568e73bfe7ff3688e"} Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.133576 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5742bd8-396a-4174-a8b7-dd6deec69632","Type":"ContainerStarted","Data":"0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce"} Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.134069 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.164730 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.960554211 podStartE2EDuration="1m4.164683855s" podCreationTimestamp="2026-01-03 06:01:47 +0000 UTC" firstStartedPulling="2026-01-03 06:02:01.631315111 +0000 UTC m=+1299.957891683" lastFinishedPulling="2026-01-03 06:02:15.835444755 +0000 UTC m=+1314.162021327" observedRunningTime="2026-01-03 06:02:51.152118015 +0000 UTC m=+1349.478694607" watchObservedRunningTime="2026-01-03 06:02:51.164683855 +0000 UTC m=+1349.491260437" Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.210515 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.059849742 podStartE2EDuration="1m4.210497366s" podCreationTimestamp="2026-01-03 06:01:47 +0000 UTC" firstStartedPulling="2026-01-03 06:02:01.611657826 +0000 UTC m=+1299.938234398" lastFinishedPulling="2026-01-03 06:02:15.76230546 +0000 UTC m=+1314.088882022" observedRunningTime="2026-01-03 06:02:51.202291323 +0000 UTC m=+1349.528867895" watchObservedRunningTime="2026-01-03 06:02:51.210497366 +0000 UTC m=+1349.537073938" Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.382177 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.939033 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64d6659995-xwhf5_1ca99325-405c-467a-a9e0-53c5e4fb96e4/console/0.log" Jan 03 06:02:51 crc kubenswrapper[4854]: I0103 06:02:51.939122 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d6659995-xwhf5" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.062204 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-trusted-ca-bundle\") pod \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.062512 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-oauth-config\") pod \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.062592 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-oauth-serving-cert\") pod \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.062654 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5dmk\" (UniqueName: \"kubernetes.io/projected/1ca99325-405c-467a-a9e0-53c5e4fb96e4-kube-api-access-s5dmk\") pod \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.062695 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-service-ca\") pod \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.062784 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-config\") pod \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.062894 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-serving-cert\") pod \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\" (UID: \"1ca99325-405c-467a-a9e0-53c5e4fb96e4\") " Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.063432 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1ca99325-405c-467a-a9e0-53c5e4fb96e4" (UID: "1ca99325-405c-467a-a9e0-53c5e4fb96e4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.063449 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-service-ca" (OuterVolumeSpecName: "service-ca") pod "1ca99325-405c-467a-a9e0-53c5e4fb96e4" (UID: "1ca99325-405c-467a-a9e0-53c5e4fb96e4"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.063484 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-config" (OuterVolumeSpecName: "console-config") pod "1ca99325-405c-467a-a9e0-53c5e4fb96e4" (UID: "1ca99325-405c-467a-a9e0-53c5e4fb96e4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.068849 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1ca99325-405c-467a-a9e0-53c5e4fb96e4" (UID: "1ca99325-405c-467a-a9e0-53c5e4fb96e4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.070467 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1ca99325-405c-467a-a9e0-53c5e4fb96e4" (UID: "1ca99325-405c-467a-a9e0-53c5e4fb96e4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.071632 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca99325-405c-467a-a9e0-53c5e4fb96e4-kube-api-access-s5dmk" (OuterVolumeSpecName: "kube-api-access-s5dmk") pod "1ca99325-405c-467a-a9e0-53c5e4fb96e4" (UID: "1ca99325-405c-467a-a9e0-53c5e4fb96e4"). InnerVolumeSpecName "kube-api-access-s5dmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.076535 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1ca99325-405c-467a-a9e0-53c5e4fb96e4" (UID: "1ca99325-405c-467a-a9e0-53c5e4fb96e4"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.104942 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fspcx"] Jan 03 06:02:52 crc kubenswrapper[4854]: E0103 06:02:52.105403 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e65dd3-c929-4a34-aa76-94ad1d7464db" containerName="init" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105420 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e65dd3-c929-4a34-aa76-94ad1d7464db" containerName="init" Jan 03 06:02:52 crc kubenswrapper[4854]: E0103 06:02:52.105434 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35aca72-6071-4bee-b7d6-c6704ec02797" containerName="init" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105439 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35aca72-6071-4bee-b7d6-c6704ec02797" containerName="init" Jan 03 06:02:52 crc kubenswrapper[4854]: E0103 06:02:52.105450 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e65dd3-c929-4a34-aa76-94ad1d7464db" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105456 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e65dd3-c929-4a34-aa76-94ad1d7464db" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: E0103 06:02:52.105472 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" containerName="init" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105478 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" containerName="init" Jan 03 06:02:52 crc kubenswrapper[4854]: E0103 06:02:52.105486 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca99325-405c-467a-a9e0-53c5e4fb96e4" containerName="console" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105492 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca99325-405c-467a-a9e0-53c5e4fb96e4" containerName="console" Jan 03 06:02:52 crc kubenswrapper[4854]: E0103 06:02:52.105503 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35aca72-6071-4bee-b7d6-c6704ec02797" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105509 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35aca72-6071-4bee-b7d6-c6704ec02797" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: E0103 06:02:52.105524 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105530 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105716 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35aca72-6071-4bee-b7d6-c6704ec02797" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105731 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca99325-405c-467a-a9e0-53c5e4fb96e4" containerName="console" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105744 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e65dd3-c929-4a34-aa76-94ad1d7464db" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.105750 4854 
memory_manager.go:354] "RemoveStaleState removing state" podUID="19db79b0-2939-4a6b-bf9a-9f35b6f63acd" containerName="dnsmasq-dns" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.106407 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fspcx" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.114926 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fspcx"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.150800 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35aca72-6071-4bee-b7d6-c6704ec02797" path="/var/lib/kubelet/pods/d35aca72-6071-4bee-b7d6-c6704ec02797/volumes" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.151739 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4c08-account-create-update-7pbzj"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.152937 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4c08-account-create-update-7pbzj" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.156619 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.157066 4854 generic.go:334] "Generic (PLEG): container finished" podID="748d9586-5917-42ab-8f1f-3a811b724dae" containerID="2190ae54060737fb03fc6ba82f135aa31d85f4332fe7ba582c8b42da8c0416cb" exitCode=0 Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.157130 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"748d9586-5917-42ab-8f1f-3a811b724dae","Type":"ContainerDied","Data":"2190ae54060737fb03fc6ba82f135aa31d85f4332fe7ba582c8b42da8c0416cb"} Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.160803 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64d6659995-xwhf5_1ca99325-405c-467a-a9e0-53c5e4fb96e4/console/0.log" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.160904 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d6659995-xwhf5" event={"ID":"1ca99325-405c-467a-a9e0-53c5e4fb96e4","Type":"ContainerDied","Data":"064a900973c6546b03e8b420b28b3f7e0bbd28f9f57dfc8ded02ffac2098d734"} Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.160943 4854 scope.go:117] "RemoveContainer" containerID="5513109ef6942d9686904d3f18a4a8b92ad267ac54c67de568e73bfe7ff3688e" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.161044 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d6659995-xwhf5" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167423 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rng2p\" (UniqueName: \"kubernetes.io/projected/1b06e03e-86ca-4379-9199-a4c1bddd4e33-kube-api-access-rng2p\") pod \"keystone-db-create-fspcx\" (UID: \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\") " pod="openstack/keystone-db-create-fspcx" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167675 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b06e03e-86ca-4379-9199-a4c1bddd4e33-operator-scripts\") pod \"keystone-db-create-fspcx\" (UID: \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\") " pod="openstack/keystone-db-create-fspcx" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167868 4854 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167881 4854 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167890 4854 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167899 4854 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ca99325-405c-467a-a9e0-53c5e4fb96e4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167906 4854 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167914 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5dmk\" (UniqueName: \"kubernetes.io/projected/1ca99325-405c-467a-a9e0-53c5e4fb96e4-kube-api-access-s5dmk\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.167925 4854 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ca99325-405c-467a-a9e0-53c5e4fb96e4-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.174219 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" event={"ID":"32ebd9b5-c83a-401d-824e-77c47a842836","Type":"ContainerStarted","Data":"505e51c0303575f1b88044ca6934d934419d8b369573ac48cb18bebe3ec8d7dc"} Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.208825 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4c08-account-create-update-7pbzj"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.257111 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ngk5h" 
podStartSLOduration=25.234405565 podStartE2EDuration="58.257093298s" podCreationTimestamp="2026-01-03 06:01:54 +0000 UTC" firstStartedPulling="2026-01-03 06:02:17.74550012 +0000 UTC m=+1316.072076692" lastFinishedPulling="2026-01-03 06:02:50.768187853 +0000 UTC m=+1349.094764425" observedRunningTime="2026-01-03 06:02:52.238819307 +0000 UTC m=+1350.565395879" watchObservedRunningTime="2026-01-03 06:02:52.257093298 +0000 UTC m=+1350.583669890" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.279395 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-operator-scripts\") pod \"keystone-4c08-account-create-update-7pbzj\" (UID: \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\") " pod="openstack/keystone-4c08-account-create-update-7pbzj" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.279500 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b06e03e-86ca-4379-9199-a4c1bddd4e33-operator-scripts\") pod \"keystone-db-create-fspcx\" (UID: \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\") " pod="openstack/keystone-db-create-fspcx" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.279604 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rshws\" (UniqueName: \"kubernetes.io/projected/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-kube-api-access-rshws\") pod \"keystone-4c08-account-create-update-7pbzj\" (UID: \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\") " pod="openstack/keystone-4c08-account-create-update-7pbzj" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.279771 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rng2p\" (UniqueName: \"kubernetes.io/projected/1b06e03e-86ca-4379-9199-a4c1bddd4e33-kube-api-access-rng2p\") pod \"keystone-db-create-fspcx\" (UID: \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\") " pod="openstack/keystone-db-create-fspcx" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.281026 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b06e03e-86ca-4379-9199-a4c1bddd4e33-operator-scripts\") pod \"keystone-db-create-fspcx\" (UID: \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\") " pod="openstack/keystone-db-create-fspcx" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.289117 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64d6659995-xwhf5"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.298481 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-64d6659995-xwhf5"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.300921 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rng2p\" (UniqueName: \"kubernetes.io/projected/1b06e03e-86ca-4379-9199-a4c1bddd4e33-kube-api-access-rng2p\") pod \"keystone-db-create-fspcx\" (UID: \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\") " pod="openstack/keystone-db-create-fspcx" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.381238 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rshws\" (UniqueName: \"kubernetes.io/projected/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-kube-api-access-rshws\") pod \"keystone-4c08-account-create-update-7pbzj\" (UID: 
\"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\") " pod="openstack/keystone-4c08-account-create-update-7pbzj" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.381413 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-operator-scripts\") pod \"keystone-4c08-account-create-update-7pbzj\" (UID: \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\") " pod="openstack/keystone-4c08-account-create-update-7pbzj" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.382042 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-operator-scripts\") pod \"keystone-4c08-account-create-update-7pbzj\" (UID: \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\") " pod="openstack/keystone-4c08-account-create-update-7pbzj" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.404532 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rshws\" (UniqueName: \"kubernetes.io/projected/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-kube-api-access-rshws\") pod \"keystone-4c08-account-create-update-7pbzj\" (UID: \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\") " pod="openstack/keystone-4c08-account-create-update-7pbzj" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.542303 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fspcx" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.559011 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qv4f6"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.560575 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qv4f6" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.561637 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4c08-account-create-update-7pbzj" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.591536 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-operator-scripts\") pod \"placement-db-create-qv4f6\" (UID: \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\") " pod="openstack/placement-db-create-qv4f6" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.591941 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9rlk\" (UniqueName: \"kubernetes.io/projected/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-kube-api-access-c9rlk\") pod \"placement-db-create-qv4f6\" (UID: \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\") " pod="openstack/placement-db-create-qv4f6" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.592136 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qv4f6"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.642294 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8d7c-account-create-update-lpc6t"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.643747 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8d7c-account-create-update-lpc6t" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.648919 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.690317 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8d7c-account-create-update-lpc6t"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.694428 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9rlk\" (UniqueName: \"kubernetes.io/projected/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-kube-api-access-c9rlk\") pod \"placement-db-create-qv4f6\" (UID: \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\") " pod="openstack/placement-db-create-qv4f6" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.694538 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac766a6-c8c8-4506-b86d-55b398c38783-operator-scripts\") pod \"placement-8d7c-account-create-update-lpc6t\" (UID: \"7ac766a6-c8c8-4506-b86d-55b398c38783\") " pod="openstack/placement-8d7c-account-create-update-lpc6t" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.694581 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-operator-scripts\") pod \"placement-db-create-qv4f6\" (UID: \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\") " pod="openstack/placement-db-create-qv4f6" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.694614 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxtq\" (UniqueName: \"kubernetes.io/projected/7ac766a6-c8c8-4506-b86d-55b398c38783-kube-api-access-5pxtq\") pod \"placement-8d7c-account-create-update-lpc6t\" (UID: \"7ac766a6-c8c8-4506-b86d-55b398c38783\") " pod="openstack/placement-8d7c-account-create-update-lpc6t" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.695643 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-operator-scripts\") pod \"placement-db-create-qv4f6\" (UID: \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\") " pod="openstack/placement-db-create-qv4f6" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.708361 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nxt9h"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.710592 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.720718 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9rlk\" (UniqueName: \"kubernetes.io/projected/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-kube-api-access-c9rlk\") pod \"placement-db-create-qv4f6\" (UID: \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\") " pod="openstack/placement-db-create-qv4f6" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.767910 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxt9h"] Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.800030 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pvk\" (UniqueName: \"kubernetes.io/projected/6f9e2844-dbc2-488b-bb08-77f9a4284a35-kube-api-access-x7pvk\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.800630 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-catalog-content\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.801010 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac766a6-c8c8-4506-b86d-55b398c38783-operator-scripts\") pod \"placement-8d7c-account-create-update-lpc6t\" (UID: \"7ac766a6-c8c8-4506-b86d-55b398c38783\") " pod="openstack/placement-8d7c-account-create-update-lpc6t" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.801391 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxtq\" (UniqueName: \"kubernetes.io/projected/7ac766a6-c8c8-4506-b86d-55b398c38783-kube-api-access-5pxtq\") pod \"placement-8d7c-account-create-update-lpc6t\" (UID: \"7ac766a6-c8c8-4506-b86d-55b398c38783\") " pod="openstack/placement-8d7c-account-create-update-lpc6t" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.801836 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-utilities\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.805366 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac766a6-c8c8-4506-b86d-55b398c38783-operator-scripts\") pod \"placement-8d7c-account-create-update-lpc6t\" (UID: \"7ac766a6-c8c8-4506-b86d-55b398c38783\") " pod="openstack/placement-8d7c-account-create-update-lpc6t" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.836010 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxtq\" (UniqueName: \"kubernetes.io/projected/7ac766a6-c8c8-4506-b86d-55b398c38783-kube-api-access-5pxtq\") pod \"placement-8d7c-account-create-update-lpc6t\" (UID: \"7ac766a6-c8c8-4506-b86d-55b398c38783\") " pod="openstack/placement-8d7c-account-create-update-lpc6t" 
Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.895710 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qv4f6" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.905860 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pvk\" (UniqueName: \"kubernetes.io/projected/6f9e2844-dbc2-488b-bb08-77f9a4284a35-kube-api-access-x7pvk\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.906001 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-catalog-content\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.906072 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-utilities\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.906785 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-catalog-content\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.906920 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-utilities\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.962781 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pvk\" (UniqueName: \"kubernetes.io/projected/6f9e2844-dbc2-488b-bb08-77f9a4284a35-kube-api-access-x7pvk\") pod \"redhat-operators-nxt9h\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:52 crc kubenswrapper[4854]: I0103 06:02:52.992810 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d7c-account-create-update-lpc6t" Jan 03 06:02:53 crc kubenswrapper[4854]: I0103 06:02:53.032021 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:02:53 crc kubenswrapper[4854]: I0103 06:02:53.259441 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fspcx"] Jan 03 06:02:53 crc kubenswrapper[4854]: I0103 06:02:53.268385 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"748d9586-5917-42ab-8f1f-3a811b724dae","Type":"ContainerStarted","Data":"b6c69658c671ea8316febf8922284a5f50e580024481037297ff09c1c29c326a"} Jan 03 06:02:53 crc kubenswrapper[4854]: I0103 06:02:53.320775 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371973.534018 podStartE2EDuration="1m3.320758131s" podCreationTimestamp="2026-01-03 06:01:50 +0000 UTC" firstStartedPulling="2026-01-03 06:02:17.280017585 +0000 UTC m=+1315.606594157" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:53.317860989 +0000 UTC m=+1351.644437571" watchObservedRunningTime="2026-01-03 06:02:53.320758131 +0000 UTC m=+1351.647334703" Jan 03 06:02:53 crc kubenswrapper[4854]: I0103 06:02:53.379011 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4c08-account-create-update-7pbzj"] Jan 03 06:02:53 crc kubenswrapper[4854]: W0103 06:02:53.382735 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde481fb0_7bd9_496e_99e1_5a3d1a25e47b.slice/crio-b258adfd7f24b821faba39e0bb3c4bdd8e956b67d47d6fac5c6e1f0d44a2800b WatchSource:0}: Error finding container b258adfd7f24b821faba39e0bb3c4bdd8e956b67d47d6fac5c6e1f0d44a2800b: Status 404 returned error can't find the container with id b258adfd7f24b821faba39e0bb3c4bdd8e956b67d47d6fac5c6e1f0d44a2800b Jan 03 06:02:53 crc kubenswrapper[4854]: I0103 06:02:53.662712 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxt9h"] Jan 03 06:02:53 crc kubenswrapper[4854]: I0103 06:02:53.670861 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8d7c-account-create-update-lpc6t"] Jan 03 06:02:53 crc kubenswrapper[4854]: I0103 06:02:53.699880 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qv4f6"] Jan 03 06:02:53 crc kubenswrapper[4854]: W0103 06:02:53.742172 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8251ed1f_e0cc_48dd_8bbd_14c8753a65a3.slice/crio-fbd49fab20f260989eeeb2d124386d9b898d723579ba9e6021e2bd9b76f0d111 WatchSource:0}: Error finding container fbd49fab20f260989eeeb2d124386d9b898d723579ba9e6021e2bd9b76f0d111: Status 404 returned error can't find the container with id fbd49fab20f260989eeeb2d124386d9b898d723579ba9e6021e2bd9b76f0d111 Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.130811 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca99325-405c-467a-a9e0-53c5e4fb96e4" path="/var/lib/kubelet/pods/1ca99325-405c-467a-a9e0-53c5e4fb96e4/volumes" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.335490 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4zb5c"] Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.355658 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.386196 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4c08-account-create-update-7pbzj" event={"ID":"de481fb0-7bd9-496e-99e1-5a3d1a25e47b","Type":"ContainerStarted","Data":"59cba3b7f6080425292c17d08b60991f665dcca155120710c11d2a3a5baa2a9f"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.386240 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4c08-account-create-update-7pbzj" event={"ID":"de481fb0-7bd9-496e-99e1-5a3d1a25e47b","Type":"ContainerStarted","Data":"b258adfd7f24b821faba39e0bb3c4bdd8e956b67d47d6fac5c6e1f0d44a2800b"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.397644 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d7c-account-create-update-lpc6t" event={"ID":"7ac766a6-c8c8-4506-b86d-55b398c38783","Type":"ContainerStarted","Data":"3285fcc2748cbe903ffe199d5a51313ce4d8e1adbf5d6cd6d2704540a31d2b60"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.397690 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d7c-account-create-update-lpc6t" event={"ID":"7ac766a6-c8c8-4506-b86d-55b398c38783","Type":"ContainerStarted","Data":"9541ee83f3d8483653cc38595cdf4617b25591096441f652a82513c293fe6b2e"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.401963 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4zb5c"] Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.445760 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxt9h" event={"ID":"6f9e2844-dbc2-488b-bb08-77f9a4284a35","Type":"ContainerStarted","Data":"d794a2b8f646de8b7b9f6c014d8157516015c573f6d3e066ed2214adc99775cb"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.445850 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxt9h" event={"ID":"6f9e2844-dbc2-488b-bb08-77f9a4284a35","Type":"ContainerStarted","Data":"c737a562b74fcbbb0e7e6a588d08d24e7fcd4ad66d86c5857d17c7b450ce65ca"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.474770 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qv4f6" event={"ID":"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3","Type":"ContainerStarted","Data":"bb439f7ca9ee1ecbb09ecb225b0b0b7cfc74798269548d20314434bc74c38b50"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.474842 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qv4f6" event={"ID":"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3","Type":"ContainerStarted","Data":"fbd49fab20f260989eeeb2d124386d9b898d723579ba9e6021e2bd9b76f0d111"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.482679 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fspcx" event={"ID":"1b06e03e-86ca-4379-9199-a4c1bddd4e33","Type":"ContainerStarted","Data":"7b4138294218f9b48ab15f5cf556619572463aa1f2fb2fc782f1dc3be5637c97"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.482724 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fspcx" event={"ID":"1b06e03e-86ca-4379-9199-a4c1bddd4e33","Type":"ContainerStarted","Data":"e8e1a4eb6862eca7b6bd2bab48a2bfd7e0846ac999e1d767e5f145a29ff993da"} Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.509444 4854 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgstn\" (UniqueName: \"kubernetes.io/projected/ec6dfccc-6930-4425-b5d6-511366ab6786-kube-api-access-dgstn\") pod \"mysqld-exporter-openstack-db-create-4zb5c\" (UID: \"ec6dfccc-6930-4425-b5d6-511366ab6786\") " pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.509848 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6dfccc-6930-4425-b5d6-511366ab6786-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-4zb5c\" (UID: \"ec6dfccc-6930-4425-b5d6-511366ab6786\") " pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.511736 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-wtw2t"] Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.515656 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: E0103 06:02:54.522344 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f9e2844_dbc2_488b_bb08_77f9a4284a35.slice/crio-conmon-d794a2b8f646de8b7b9f6c014d8157516015c573f6d3e066ed2214adc99775cb.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.533212 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-a2fa-account-create-update-5tjqb"] Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.534367 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-4c08-account-create-update-7pbzj" podStartSLOduration=2.534347942 podStartE2EDuration="2.534347942s" podCreationTimestamp="2026-01-03 06:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:54.500470096 +0000 UTC m=+1352.827046668" watchObservedRunningTime="2026-01-03 06:02:54.534347942 +0000 UTC m=+1352.860924524" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.535035 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.538793 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.601859 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a2fa-account-create-update-5tjqb"] Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612373 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612474 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmtmz\" (UniqueName: \"kubernetes.io/projected/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-kube-api-access-wmtmz\") pod \"mysqld-exporter-a2fa-account-create-update-5tjqb\" (UID: \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\") " pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612520 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8bk9\" (UniqueName: \"kubernetes.io/projected/6e3f49c8-b025-4f3c-b356-847e0286a103-kube-api-access-w8bk9\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612541 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6dfccc-6930-4425-b5d6-511366ab6786-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-4zb5c\" (UID: \"ec6dfccc-6930-4425-b5d6-511366ab6786\") " pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612558 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-operator-scripts\") pod \"mysqld-exporter-a2fa-account-create-update-5tjqb\" (UID: \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\") " pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612631 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-dns-svc\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612658 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-config\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612685 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.612711 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgstn\" (UniqueName: \"kubernetes.io/projected/ec6dfccc-6930-4425-b5d6-511366ab6786-kube-api-access-dgstn\") pod \"mysqld-exporter-openstack-db-create-4zb5c\" (UID: \"ec6dfccc-6930-4425-b5d6-511366ab6786\") " pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.613834 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6dfccc-6930-4425-b5d6-511366ab6786-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-4zb5c\" (UID: \"ec6dfccc-6930-4425-b5d6-511366ab6786\") " pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.639170 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wtw2t"] Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.647768 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8d7c-account-create-update-lpc6t" podStartSLOduration=2.647745739 podStartE2EDuration="2.647745739s" podCreationTimestamp="2026-01-03 06:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:54.577615339 +0000 UTC m=+1352.904191921" watchObservedRunningTime="2026-01-03 06:02:54.647745739 +0000 UTC m=+1352.974322311" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.677921 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgstn\" (UniqueName: \"kubernetes.io/projected/ec6dfccc-6930-4425-b5d6-511366ab6786-kube-api-access-dgstn\") pod \"mysqld-exporter-openstack-db-create-4zb5c\" (UID: \"ec6dfccc-6930-4425-b5d6-511366ab6786\") " pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.710413 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-qv4f6" podStartSLOduration=2.710388785 podStartE2EDuration="2.710388785s" podCreationTimestamp="2026-01-03 06:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:54.659569611 +0000 UTC m=+1352.986146193" watchObservedRunningTime="2026-01-03 06:02:54.710388785 +0000 UTC m=+1353.036965367" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.713942 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmtmz\" (UniqueName: \"kubernetes.io/projected/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-kube-api-access-wmtmz\") pod \"mysqld-exporter-a2fa-account-create-update-5tjqb\" (UID: \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\") " pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.714010 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8bk9\" (UniqueName: \"kubernetes.io/projected/6e3f49c8-b025-4f3c-b356-847e0286a103-kube-api-access-w8bk9\") pod 
\"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.714036 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-operator-scripts\") pod \"mysqld-exporter-a2fa-account-create-update-5tjqb\" (UID: \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\") " pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.714146 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-dns-svc\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.714194 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-config\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.714225 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.714267 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.715155 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.715224 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-dns-svc\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.715729 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-config\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.715983 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " 
pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.716049 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-operator-scripts\") pod \"mysqld-exporter-a2fa-account-create-update-5tjqb\" (UID: \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\") " pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.738933 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8bk9\" (UniqueName: \"kubernetes.io/projected/6e3f49c8-b025-4f3c-b356-847e0286a103-kube-api-access-w8bk9\") pod \"dnsmasq-dns-698758b865-wtw2t\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.751432 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-fspcx" podStartSLOduration=2.7514063970000002 podStartE2EDuration="2.751406397s" podCreationTimestamp="2026-01-03 06:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:02:54.683648145 +0000 UTC m=+1353.010224717" watchObservedRunningTime="2026-01-03 06:02:54.751406397 +0000 UTC m=+1353.077982989" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.765615 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.768551 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmtmz\" (UniqueName: \"kubernetes.io/projected/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-kube-api-access-wmtmz\") pod \"mysqld-exporter-a2fa-account-create-update-5tjqb\" (UID: \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\") " pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.893246 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:02:54 crc kubenswrapper[4854]: I0103 06:02:54.916477 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.496818 4854 generic.go:334] "Generic (PLEG): container finished" podID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerID="d794a2b8f646de8b7b9f6c014d8157516015c573f6d3e066ed2214adc99775cb" exitCode=0 Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.496913 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxt9h" event={"ID":"6f9e2844-dbc2-488b-bb08-77f9a4284a35","Type":"ContainerDied","Data":"d794a2b8f646de8b7b9f6c014d8157516015c573f6d3e066ed2214adc99775cb"} Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.647478 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.655615 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.658171 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.658276 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pxs4w" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.663176 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.663397 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.685514 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.741473 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f6a47ad8-d256-453c-910a-1506c8f73657-cache\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.741570 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6266f92c-0498-4b03-86a9-217bd98d3c20\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6266f92c-0498-4b03-86a9-217bd98d3c20\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.741667 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.741693 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f6a47ad8-d256-453c-910a-1506c8f73657-lock\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.741757 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czjs4\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-kube-api-access-czjs4\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.843398 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6266f92c-0498-4b03-86a9-217bd98d3c20\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6266f92c-0498-4b03-86a9-217bd98d3c20\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.843955 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 
06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.843980 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f6a47ad8-d256-453c-910a-1506c8f73657-lock\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: E0103 06:02:55.844123 4854 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 06:02:55 crc kubenswrapper[4854]: E0103 06:02:55.844140 4854 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 06:02:55 crc kubenswrapper[4854]: E0103 06:02:55.844186 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift podName:f6a47ad8-d256-453c-910a-1506c8f73657 nodeName:}" failed. No retries permitted until 2026-01-03 06:02:56.344173448 +0000 UTC m=+1354.670750020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift") pod "swift-storage-0" (UID: "f6a47ad8-d256-453c-910a-1506c8f73657") : configmap "swift-ring-files" not found Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.844435 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czjs4\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-kube-api-access-czjs4\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.844553 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f6a47ad8-d256-453c-910a-1506c8f73657-lock\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.844594 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f6a47ad8-d256-453c-910a-1506c8f73657-cache\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.844921 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f6a47ad8-d256-453c-910a-1506c8f73657-cache\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.846345 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.846381 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6266f92c-0498-4b03-86a9-217bd98d3c20\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6266f92c-0498-4b03-86a9-217bd98d3c20\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3d3bbad69d3872ee3486f9d8772169e3bb11be9a7bfc4330843b7f8b93627593/globalmount\"" pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.864951 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czjs4\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-kube-api-access-czjs4\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:55 crc kubenswrapper[4854]: I0103 06:02:55.886416 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6266f92c-0498-4b03-86a9-217bd98d3c20\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6266f92c-0498-4b03-86a9-217bd98d3c20\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:56 crc kubenswrapper[4854]: I0103 06:02:56.355987 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:56 crc kubenswrapper[4854]: E0103 06:02:56.356227 4854 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 06:02:56 crc kubenswrapper[4854]: E0103 06:02:56.356266 4854 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 06:02:56 crc kubenswrapper[4854]: E0103 06:02:56.356341 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift podName:f6a47ad8-d256-453c-910a-1506c8f73657 nodeName:}" failed. No retries permitted until 2026-01-03 06:02:57.356317104 +0000 UTC m=+1355.682893676 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift") pod "swift-storage-0" (UID: "f6a47ad8-d256-453c-910a-1506c8f73657") : configmap "swift-ring-files" not found Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.382905 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:57 crc kubenswrapper[4854]: E0103 06:02:57.383422 4854 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 06:02:57 crc kubenswrapper[4854]: E0103 06:02:57.383437 4854 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 06:02:57 crc kubenswrapper[4854]: E0103 06:02:57.383486 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift podName:f6a47ad8-d256-453c-910a-1506c8f73657 nodeName:}" failed. No retries permitted until 2026-01-03 06:02:59.383471006 +0000 UTC m=+1357.710047578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift") pod "swift-storage-0" (UID: "f6a47ad8-d256-453c-910a-1506c8f73657") : configmap "swift-ring-files" not found Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.667641 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-27tnm"] Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.669559 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-27tnm" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.677521 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-27tnm"] Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.756055 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-aee3-account-create-update-9rwqc"] Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.757908 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-aee3-account-create-update-9rwqc" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.765153 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.770200 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-aee3-account-create-update-9rwqc"] Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.794034 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhff\" (UniqueName: \"kubernetes.io/projected/9e58db94-b238-4fd5-a833-0fc6f281465c-kube-api-access-slhff\") pod \"glance-aee3-account-create-update-9rwqc\" (UID: \"9e58db94-b238-4fd5-a833-0fc6f281465c\") " pod="openstack/glance-aee3-account-create-update-9rwqc" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.794219 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a21d16f-c305-4792-bad1-2eb5451b15dc-operator-scripts\") pod \"glance-db-create-27tnm\" (UID: \"8a21d16f-c305-4792-bad1-2eb5451b15dc\") " pod="openstack/glance-db-create-27tnm" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.794251 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k89sj\" (UniqueName: \"kubernetes.io/projected/8a21d16f-c305-4792-bad1-2eb5451b15dc-kube-api-access-k89sj\") pod \"glance-db-create-27tnm\" (UID: \"8a21d16f-c305-4792-bad1-2eb5451b15dc\") " pod="openstack/glance-db-create-27tnm" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.794411 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e58db94-b238-4fd5-a833-0fc6f281465c-operator-scripts\") pod \"glance-aee3-account-create-update-9rwqc\" (UID: \"9e58db94-b238-4fd5-a833-0fc6f281465c\") " pod="openstack/glance-aee3-account-create-update-9rwqc" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.896724 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a21d16f-c305-4792-bad1-2eb5451b15dc-operator-scripts\") pod \"glance-db-create-27tnm\" (UID: \"8a21d16f-c305-4792-bad1-2eb5451b15dc\") " pod="openstack/glance-db-create-27tnm" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.897653 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k89sj\" (UniqueName: \"kubernetes.io/projected/8a21d16f-c305-4792-bad1-2eb5451b15dc-kube-api-access-k89sj\") pod \"glance-db-create-27tnm\" (UID: \"8a21d16f-c305-4792-bad1-2eb5451b15dc\") " pod="openstack/glance-db-create-27tnm" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.897583 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a21d16f-c305-4792-bad1-2eb5451b15dc-operator-scripts\") pod \"glance-db-create-27tnm\" (UID: \"8a21d16f-c305-4792-bad1-2eb5451b15dc\") " pod="openstack/glance-db-create-27tnm" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.897932 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e58db94-b238-4fd5-a833-0fc6f281465c-operator-scripts\") pod 
\"glance-aee3-account-create-update-9rwqc\" (UID: \"9e58db94-b238-4fd5-a833-0fc6f281465c\") " pod="openstack/glance-aee3-account-create-update-9rwqc" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.898497 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e58db94-b238-4fd5-a833-0fc6f281465c-operator-scripts\") pod \"glance-aee3-account-create-update-9rwqc\" (UID: \"9e58db94-b238-4fd5-a833-0fc6f281465c\") " pod="openstack/glance-aee3-account-create-update-9rwqc" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.898632 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slhff\" (UniqueName: \"kubernetes.io/projected/9e58db94-b238-4fd5-a833-0fc6f281465c-kube-api-access-slhff\") pod \"glance-aee3-account-create-update-9rwqc\" (UID: \"9e58db94-b238-4fd5-a833-0fc6f281465c\") " pod="openstack/glance-aee3-account-create-update-9rwqc" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.918553 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k89sj\" (UniqueName: \"kubernetes.io/projected/8a21d16f-c305-4792-bad1-2eb5451b15dc-kube-api-access-k89sj\") pod \"glance-db-create-27tnm\" (UID: \"8a21d16f-c305-4792-bad1-2eb5451b15dc\") " pod="openstack/glance-db-create-27tnm" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.918721 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhff\" (UniqueName: \"kubernetes.io/projected/9e58db94-b238-4fd5-a833-0fc6f281465c-kube-api-access-slhff\") pod \"glance-aee3-account-create-update-9rwqc\" (UID: \"9e58db94-b238-4fd5-a833-0fc6f281465c\") " pod="openstack/glance-aee3-account-create-update-9rwqc" Jan 03 06:02:57 crc kubenswrapper[4854]: I0103 06:02:57.992759 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-27tnm" Jan 03 06:02:58 crc kubenswrapper[4854]: I0103 06:02:58.076673 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-aee3-account-create-update-9rwqc" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.436003 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:02:59 crc kubenswrapper[4854]: E0103 06:02:59.436602 4854 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 06:02:59 crc kubenswrapper[4854]: E0103 06:02:59.438179 4854 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 06:02:59 crc kubenswrapper[4854]: E0103 06:02:59.438293 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift podName:f6a47ad8-d256-453c-910a-1506c8f73657 nodeName:}" failed. No retries permitted until 2026-01-03 06:03:03.438261881 +0000 UTC m=+1361.764838493 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift") pod "swift-storage-0" (UID: "f6a47ad8-d256-453c-910a-1506c8f73657") : configmap "swift-ring-files" not found Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.468494 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rmh8c"] Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.470347 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rmh8c" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.473217 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.479322 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-l7b7k"] Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.481181 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.484798 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.493910 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.493910 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.504187 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rmh8c"] Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.523806 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l7b7k"] Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541318 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-combined-ca-bundle\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541380 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-dispersionconf\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541427 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e5be2e-c854-47b3-b5c5-312a82700553-operator-scripts\") pod \"root-account-create-update-rmh8c\" (UID: \"50e5be2e-c854-47b3-b5c5-312a82700553\") " pod="openstack/root-account-create-update-rmh8c" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541463 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-ring-data-devices\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " 
pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541486 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddwd7\" (UniqueName: \"kubernetes.io/projected/50e5be2e-c854-47b3-b5c5-312a82700553-kube-api-access-ddwd7\") pod \"root-account-create-update-rmh8c\" (UID: \"50e5be2e-c854-47b3-b5c5-312a82700553\") " pod="openstack/root-account-create-update-rmh8c" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541558 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ssn\" (UniqueName: \"kubernetes.io/projected/db4adf09-eb0a-4a6e-a49f-78e43cf04124-kube-api-access-l4ssn\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541574 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-swiftconf\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541600 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-scripts\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.541680 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db4adf09-eb0a-4a6e-a49f-78e43cf04124-etc-swift\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643266 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-ring-data-devices\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643310 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddwd7\" (UniqueName: \"kubernetes.io/projected/50e5be2e-c854-47b3-b5c5-312a82700553-kube-api-access-ddwd7\") pod \"root-account-create-update-rmh8c\" (UID: \"50e5be2e-c854-47b3-b5c5-312a82700553\") " pod="openstack/root-account-create-update-rmh8c" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643374 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4ssn\" (UniqueName: \"kubernetes.io/projected/db4adf09-eb0a-4a6e-a49f-78e43cf04124-kube-api-access-l4ssn\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643400 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-swiftconf\") pod \"swift-ring-rebalance-l7b7k\" (UID: 
\"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643420 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-scripts\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643486 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db4adf09-eb0a-4a6e-a49f-78e43cf04124-etc-swift\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643515 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-combined-ca-bundle\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643538 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-dispersionconf\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.643569 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e5be2e-c854-47b3-b5c5-312a82700553-operator-scripts\") pod \"root-account-create-update-rmh8c\" (UID: \"50e5be2e-c854-47b3-b5c5-312a82700553\") " pod="openstack/root-account-create-update-rmh8c" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.644291 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e5be2e-c854-47b3-b5c5-312a82700553-operator-scripts\") pod \"root-account-create-update-rmh8c\" (UID: \"50e5be2e-c854-47b3-b5c5-312a82700553\") " pod="openstack/root-account-create-update-rmh8c" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.644314 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-ring-data-devices\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.644464 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-scripts\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.644511 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db4adf09-eb0a-4a6e-a49f-78e43cf04124-etc-swift\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: 
I0103 06:02:59.649786 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-combined-ca-bundle\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.650213 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-swiftconf\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.652056 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-dispersionconf\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.658068 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddwd7\" (UniqueName: \"kubernetes.io/projected/50e5be2e-c854-47b3-b5c5-312a82700553-kube-api-access-ddwd7\") pod \"root-account-create-update-rmh8c\" (UID: \"50e5be2e-c854-47b3-b5c5-312a82700553\") " pod="openstack/root-account-create-update-rmh8c" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.658573 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4ssn\" (UniqueName: \"kubernetes.io/projected/db4adf09-eb0a-4a6e-a49f-78e43cf04124-kube-api-access-l4ssn\") pod \"swift-ring-rebalance-l7b7k\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.807474 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rmh8c" Jan 03 06:02:59 crc kubenswrapper[4854]: I0103 06:02:59.820236 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-l7b7k"
Jan 03 06:03:02 crc kubenswrapper[4854]: I0103 06:03:02.294117 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 03 06:03:02 crc kubenswrapper[4854]: I0103 06:03:02.294685 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 03 06:03:03 crc kubenswrapper[4854]: I0103 06:03:03.457825 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0"
Jan 03 06:03:03 crc kubenswrapper[4854]: E0103 06:03:03.458123 4854 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 03 06:03:03 crc kubenswrapper[4854]: E0103 06:03:03.458333 4854 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 03 06:03:03 crc kubenswrapper[4854]: E0103 06:03:03.458409 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift podName:f6a47ad8-d256-453c-910a-1506c8f73657 nodeName:}" failed. No retries permitted until 2026-01-03 06:03:11.458383905 +0000 UTC m=+1369.784960487 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift") pod "swift-storage-0" (UID: "f6a47ad8-d256-453c-910a-1506c8f73657") : configmap "swift-ring-files" not found
Jan 03 06:03:09 crc kubenswrapper[4854]: I0103 06:03:09.043999 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Jan 03 06:03:09 crc kubenswrapper[4854]: I0103 06:03:09.395251 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused"
Jan 03 06:03:09 crc kubenswrapper[4854]: I0103 06:03:09.704881 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 03 06:03:09 crc kubenswrapper[4854]: I0103 06:03:09.704969 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 03 06:03:11 crc kubenswrapper[4854]: I0103 06:03:11.553518 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0"
Jan 03 06:03:11 crc kubenswrapper[4854]: E0103 06:03:11.553681 4854 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 03 06:03:11 crc kubenswrapper[4854]: E0103 06:03:11.554284 4854 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 03 06:03:11 crc kubenswrapper[4854]: E0103 06:03:11.554376 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift podName:f6a47ad8-d256-453c-910a-1506c8f73657 nodeName:}" failed. No retries permitted until 2026-01-03 06:03:27.554355739 +0000 UTC m=+1385.880932321 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift") pod "swift-storage-0" (UID: "f6a47ad8-d256-453c-910a-1506c8f73657") : configmap "swift-ring-files" not found
Jan 03 06:03:11 crc kubenswrapper[4854]: I0103 06:03:11.692233 4854 generic.go:334] "Generic (PLEG): container finished" podID="8251ed1f-e0cc-48dd-8bbd-14c8753a65a3" containerID="bb439f7ca9ee1ecbb09ecb225b0b0b7cfc74798269548d20314434bc74c38b50" exitCode=0
Jan 03 06:03:11 crc kubenswrapper[4854]: I0103 06:03:11.692311 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qv4f6" event={"ID":"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3","Type":"ContainerDied","Data":"bb439f7ca9ee1ecbb09ecb225b0b0b7cfc74798269548d20314434bc74c38b50"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:12.436012 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mkvp7"
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:12.439279 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mkvp7"
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:12.984697 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dll2c" podUID="04465680-9e76-4b04-aa5f-c94218a6bf28" containerName="ovn-controller" probeResult="failure" output=<
Jan 03 06:03:17 crc kubenswrapper[4854]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 03 06:03:17 crc kubenswrapper[4854]: >
Jan 03 06:03:17 crc kubenswrapper[4854]: E0103 06:03:12.996817 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741"
Jan 03 06:03:17 crc kubenswrapper[4854]: E0103 06:03:12.997020 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng2qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(97a38e3c-dd5a-447b-b580-ed7bd5f16fde): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:13.822123 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2518f81-3d3d-47a6-a157-19c2685f07d2","Type":"ContainerStarted","Data":"b65f5d8c9356c57828ae2c5f130c3053c2ad374bb9721b32ae54d663648d9a17"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:13.822605 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:13.823738 4854 generic.go:334] "Generic (PLEG): container finished" podID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerID="3da223885f26a8fbf9908728d64ed6e77133ba67ab412d2929573a54cadd668b" exitCode=0
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:13.823867 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ba007649-daf8-445b-b2c8-73ce6ec54403","Type":"ContainerDied","Data":"3da223885f26a8fbf9908728d64ed6e77133ba67ab412d2929573a54cadd668b"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:13.825723 4854 generic.go:334] "Generic (PLEG): container finished" podID="11d4187f-5938-4054-9eec-4d84f843bd73" containerID="8b3905b680a06f0704057089d6a59eff8699f94da007aad0b4bf2bf6b922a256" exitCode=0
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:13.825769 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"11d4187f-5938-4054-9eec-4d84f843bd73","Type":"ContainerDied","Data":"8b3905b680a06f0704057089d6a59eff8699f94da007aad0b4bf2bf6b922a256"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:13.847382 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=25.294589487 podStartE2EDuration="1m20.847363142s" podCreationTimestamp="2026-01-03 06:01:53 +0000 UTC" firstStartedPulling="2026-01-03 06:02:16.413440035 +0000 UTC m=+1314.740016607" lastFinishedPulling="2026-01-03 06:03:11.96621367 +0000 UTC m=+1370.292790262" observedRunningTime="2026-01-03 06:03:13.846021238 +0000 UTC m=+1372.172597820" watchObservedRunningTime="2026-01-03 06:03:13.847363142 +0000 UTC m=+1372.173939724"
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:15.847368 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ba007649-daf8-445b-b2c8-73ce6ec54403","Type":"ContainerStarted","Data":"47373f0c373d41d2ed0a8769659142e352b16482458dbc79ebf2fb0ab0dcee4a"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:15.849241 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"11d4187f-5938-4054-9eec-4d84f843bd73","Type":"ContainerStarted","Data":"1692c8acfa3150463e84907272e673ac637c61b8759e684e77f9e6829b387f9e"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:16.863016 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:16.901267 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371946.953548 podStartE2EDuration="1m29.901228146s" podCreationTimestamp="2026-01-03 06:01:47 +0000 UTC" firstStartedPulling="2026-01-03 06:01:59.036323498 +0000 UTC m=+1297.362900080" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:03:16.894833808 +0000 UTC m=+1375.221410380" watchObservedRunningTime="2026-01-03 06:03:16.901228146 +0000 UTC m=+1375.227804808"
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:16.944572 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371946.910223 podStartE2EDuration="1m29.944553295s" podCreationTimestamp="2026-01-03 06:01:47 +0000 UTC" firstStartedPulling="2026-01-03 06:01:50.016363092 +0000 UTC m=+1288.342939674" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:03:16.936365393 +0000 UTC m=+1375.262942035" watchObservedRunningTime="2026-01-03 06:03:16.944553295 +0000 UTC m=+1375.271129857"
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.722269 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-aee3-account-create-update-9rwqc"]
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.812961 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rmh8c"]
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.845665 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dll2c" podUID="04465680-9e76-4b04-aa5f-c94218a6bf28" containerName="ovn-controller" probeResult="failure" output=<
Jan 03 06:03:17 crc kubenswrapper[4854]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 03 06:03:17 crc kubenswrapper[4854]: >
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.850920 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-27tnm"]
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.916462 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerStarted","Data":"c98c7572e2fadf2eedfaaf84522ebe9bdf58a2d6ca98573910bedf1e6020bc9a"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.917516 4854 generic.go:334] "Generic (PLEG): container finished" podID="de481fb0-7bd9-496e-99e1-5a3d1a25e47b" containerID="59cba3b7f6080425292c17d08b60991f665dcca155120710c11d2a3a5baa2a9f" exitCode=0
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.917552 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4c08-account-create-update-7pbzj" event={"ID":"de481fb0-7bd9-496e-99e1-5a3d1a25e47b","Type":"ContainerDied","Data":"59cba3b7f6080425292c17d08b60991f665dcca155120710c11d2a3a5baa2a9f"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.925186 4854 generic.go:334] "Generic (PLEG): container finished" podID="7ac766a6-c8c8-4506-b86d-55b398c38783" containerID="3285fcc2748cbe903ffe199d5a51313ce4d8e1adbf5d6cd6d2704540a31d2b60" exitCode=0
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.925241 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d7c-account-create-update-lpc6t" event={"ID":"7ac766a6-c8c8-4506-b86d-55b398c38783","Type":"ContainerDied","Data":"3285fcc2748cbe903ffe199d5a51313ce4d8e1adbf5d6cd6d2704540a31d2b60"}
Jan 03 06:03:17 crc kubenswrapper[4854]: I0103 06:03:17.950542 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aee3-account-create-update-9rwqc" event={"ID":"9e58db94-b238-4fd5-a833-0fc6f281465c","Type":"ContainerStarted","Data":"75a19b6adc11ec28bb484adcd2ba10a266e0129ef12a49d40e6a47ad92a65913"}
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.046152 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wtw2t"]
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.063954 4854 generic.go:334] "Generic (PLEG): container finished" podID="1b06e03e-86ca-4379-9199-a4c1bddd4e33" containerID="7b4138294218f9b48ab15f5cf556619572463aa1f2fb2fc782f1dc3be5637c97" exitCode=0
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.064525 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l7b7k"]
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.064653 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fspcx" event={"ID":"1b06e03e-86ca-4379-9199-a4c1bddd4e33","Type":"ContainerDied","Data":"7b4138294218f9b48ab15f5cf556619572463aa1f2fb2fc782f1dc3be5637c97"}
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.095679 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4zb5c"]
Jan 03 06:03:18 crc kubenswrapper[4854]: W0103 06:03:18.157509 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb4adf09_eb0a_4a6e_a49f_78e43cf04124.slice/crio-63be384d9585ad73ddc603a544fbf895fb1e0400237aa983261a872d72a43141 WatchSource:0}: Error finding container 63be384d9585ad73ddc603a544fbf895fb1e0400237aa983261a872d72a43141: Status 404 returned error can't find the container with id 63be384d9585ad73ddc603a544fbf895fb1e0400237aa983261a872d72a43141
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.161172 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a2fa-account-create-update-5tjqb"]
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.268952 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qv4f6"
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.365463 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-operator-scripts\") pod \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\" (UID: \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\") "
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.365633 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9rlk\" (UniqueName: \"kubernetes.io/projected/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-kube-api-access-c9rlk\") pod \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\" (UID: \"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3\") "
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.366217 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8251ed1f-e0cc-48dd-8bbd-14c8753a65a3" (UID: "8251ed1f-e0cc-48dd-8bbd-14c8753a65a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.366448 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.378349 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-kube-api-access-c9rlk" (OuterVolumeSpecName: "kube-api-access-c9rlk") pod "8251ed1f-e0cc-48dd-8bbd-14c8753a65a3" (UID: "8251ed1f-e0cc-48dd-8bbd-14c8753a65a3"). InnerVolumeSpecName "kube-api-access-c9rlk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:18 crc kubenswrapper[4854]: I0103 06:03:18.468918 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9rlk\" (UniqueName: \"kubernetes.io/projected/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3-kube-api-access-c9rlk\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.041324 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.060588 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.083931 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wtw2t" event={"ID":"6e3f49c8-b025-4f3c-b356-847e0286a103","Type":"ContainerStarted","Data":"962a019d9d1075c60d4ad9fa8502f53d59fa2b6c80ecebe9f348e81e1bef1dd3"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.086608 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxt9h" event={"ID":"6f9e2844-dbc2-488b-bb08-77f9a4284a35","Type":"ContainerStarted","Data":"61da04aa23070db5bd8e652fd9d7dc4eb5ea08d20f929d20b3d913d951c1c41f"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.092322 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rmh8c" event={"ID":"50e5be2e-c854-47b3-b5c5-312a82700553","Type":"ContainerStarted","Data":"2c18ad83e63a4aed636889a42df51df1b95d61ac5f229e131fa8c306c15c3db1"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.092367 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rmh8c" event={"ID":"50e5be2e-c854-47b3-b5c5-312a82700553","Type":"ContainerStarted","Data":"a0853f075496f30ce6c40b70fba126ce448d2c5d36dcbf075d9a301c86703da8"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.094297 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l7b7k" event={"ID":"db4adf09-eb0a-4a6e-a49f-78e43cf04124","Type":"ContainerStarted","Data":"63be384d9585ad73ddc603a544fbf895fb1e0400237aa983261a872d72a43141"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.099910 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qv4f6" event={"ID":"8251ed1f-e0cc-48dd-8bbd-14c8753a65a3","Type":"ContainerDied","Data":"fbd49fab20f260989eeeb2d124386d9b898d723579ba9e6021e2bd9b76f0d111"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.099940 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd49fab20f260989eeeb2d124386d9b898d723579ba9e6021e2bd9b76f0d111"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.099993 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qv4f6"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.115712 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" event={"ID":"ec6dfccc-6930-4425-b5d6-511366ab6786","Type":"ContainerStarted","Data":"de25c359eaad7b92aa63fa4ef0fd0c752e5fac56ef7791d15044fc12d09efe7f"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.115769 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" event={"ID":"ec6dfccc-6930-4425-b5d6-511366ab6786","Type":"ContainerStarted","Data":"3e53776b63fcd5c8d37e65ff115a53e94c3904b75c520313b8c61bf7aff714e7"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.118804 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" event={"ID":"7c244d13-e5f5-4f26-a2b4-361a8012b0c1","Type":"ContainerStarted","Data":"ff6c8932491e14a996d5c0dd2761667e73f50b596c819e84d1cb1ad74860b7d1"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.118844 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" event={"ID":"7c244d13-e5f5-4f26-a2b4-361a8012b0c1","Type":"ContainerStarted","Data":"278cdf65e574bcce1384ffcd1ba7e1c073722fd3ab90cb8b7272492b5ec56be9"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.126197 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"73c3ad1e-a419-4c11-a31d-81f28866fe2b","Type":"ContainerStarted","Data":"e016fe87f1741c7594400972b9fcd82c472e152e970f8acc9b823d381c1c4324"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.127577 4854 generic.go:334] "Generic (PLEG): container finished" podID="9e58db94-b238-4fd5-a833-0fc6f281465c" containerID="aebe30f1e4dfce24a408d34c32ceaff1b8ec4b22e1664456f238bb50a1112a47" exitCode=0
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.127705 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aee3-account-create-update-9rwqc" event={"ID":"9e58db94-b238-4fd5-a833-0fc6f281465c","Type":"ContainerDied","Data":"aebe30f1e4dfce24a408d34c32ceaff1b8ec4b22e1664456f238bb50a1112a47"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.133918 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-27tnm" event={"ID":"8a21d16f-c305-4792-bad1-2eb5451b15dc","Type":"ContainerStarted","Data":"6c2b8f2cd6d5e76f60ddfd13d59dc3b7172c67d3dea0535bf21c5fea30948d35"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.133992 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-27tnm" event={"ID":"8a21d16f-c305-4792-bad1-2eb5451b15dc","Type":"ContainerStarted","Data":"e3b4a24d71a98b4de63f281dc490026d4ccfdd33e449eeb684147ea5f9d79106"}
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.177743 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-rmh8c" podStartSLOduration=20.177706391 podStartE2EDuration="20.177706391s" podCreationTimestamp="2026-01-03 06:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:03:19.140637147 +0000 UTC m=+1377.467213719" watchObservedRunningTime="2026-01-03 06:03:19.177706391 +0000 UTC m=+1377.504282973"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.179177 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-27tnm" podStartSLOduration=22.179160797 podStartE2EDuration="22.179160797s" podCreationTimestamp="2026-01-03 06:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:03:19.160393025 +0000 UTC m=+1377.486969597" watchObservedRunningTime="2026-01-03 06:03:19.179160797 +0000 UTC m=+1377.505737369"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.203535 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.519525149 podStartE2EDuration="1m19.203512708s" podCreationTimestamp="2026-01-03 06:02:00 +0000 UTC" firstStartedPulling="2026-01-03 06:02:17.749460128 +0000 UTC m=+1316.076036700" lastFinishedPulling="2026-01-03 06:03:17.433447687 +0000 UTC m=+1375.760024259" observedRunningTime="2026-01-03 06:03:19.201032606 +0000 UTC m=+1377.527609178" watchObservedRunningTime="2026-01-03 06:03:19.203512708 +0000 UTC m=+1377.530089280"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.258871 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" podStartSLOduration=25.258833792 podStartE2EDuration="25.258833792s" podCreationTimestamp="2026-01-03 06:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:03:19.240038239 +0000 UTC m=+1377.566614811" watchObservedRunningTime="2026-01-03 06:03:19.258833792 +0000 UTC m=+1377.585410364"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.308991 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" podStartSLOduration=25.308970389 podStartE2EDuration="25.308970389s" podCreationTimestamp="2026-01-03 06:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:03:19.297516197 +0000 UTC m=+1377.624092759" watchObservedRunningTime="2026-01-03 06:03:19.308970389 +0000 UTC m=+1377.635546961"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.394720 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.655145 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d7c-account-create-update-lpc6t"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.657254 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.746052 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pxtq\" (UniqueName: \"kubernetes.io/projected/7ac766a6-c8c8-4506-b86d-55b398c38783-kube-api-access-5pxtq\") pod \"7ac766a6-c8c8-4506-b86d-55b398c38783\" (UID: \"7ac766a6-c8c8-4506-b86d-55b398c38783\") "
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.746201 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac766a6-c8c8-4506-b86d-55b398c38783-operator-scripts\") pod \"7ac766a6-c8c8-4506-b86d-55b398c38783\" (UID: \"7ac766a6-c8c8-4506-b86d-55b398c38783\") "
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.749287 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac766a6-c8c8-4506-b86d-55b398c38783-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ac766a6-c8c8-4506-b86d-55b398c38783" (UID: "7ac766a6-c8c8-4506-b86d-55b398c38783"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.752729 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac766a6-c8c8-4506-b86d-55b398c38783-kube-api-access-5pxtq" (OuterVolumeSpecName: "kube-api-access-5pxtq") pod "7ac766a6-c8c8-4506-b86d-55b398c38783" (UID: "7ac766a6-c8c8-4506-b86d-55b398c38783"). InnerVolumeSpecName "kube-api-access-5pxtq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.848745 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pxtq\" (UniqueName: \"kubernetes.io/projected/7ac766a6-c8c8-4506-b86d-55b398c38783-kube-api-access-5pxtq\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.848777 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac766a6-c8c8-4506-b86d-55b398c38783-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.958907 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4c08-account-create-update-7pbzj"
Jan 03 06:03:19 crc kubenswrapper[4854]: I0103 06:03:19.966627 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fspcx"
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.056832 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b06e03e-86ca-4379-9199-a4c1bddd4e33-operator-scripts\") pod \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\" (UID: \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\") "
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.057072 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rng2p\" (UniqueName: \"kubernetes.io/projected/1b06e03e-86ca-4379-9199-a4c1bddd4e33-kube-api-access-rng2p\") pod \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\" (UID: \"1b06e03e-86ca-4379-9199-a4c1bddd4e33\") "
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.057293 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rshws\" (UniqueName: \"kubernetes.io/projected/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-kube-api-access-rshws\") pod \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\" (UID: \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\") "
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.057356 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-operator-scripts\") pod \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\" (UID: \"de481fb0-7bd9-496e-99e1-5a3d1a25e47b\") "
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.059844 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de481fb0-7bd9-496e-99e1-5a3d1a25e47b" (UID: "de481fb0-7bd9-496e-99e1-5a3d1a25e47b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.060525 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b06e03e-86ca-4379-9199-a4c1bddd4e33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b06e03e-86ca-4379-9199-a4c1bddd4e33" (UID: "1b06e03e-86ca-4379-9199-a4c1bddd4e33"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.083941 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-kube-api-access-rshws" (OuterVolumeSpecName: "kube-api-access-rshws") pod "de481fb0-7bd9-496e-99e1-5a3d1a25e47b" (UID: "de481fb0-7bd9-496e-99e1-5a3d1a25e47b"). InnerVolumeSpecName "kube-api-access-rshws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.095636 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b06e03e-86ca-4379-9199-a4c1bddd4e33-kube-api-access-rng2p" (OuterVolumeSpecName: "kube-api-access-rng2p") pod "1b06e03e-86ca-4379-9199-a4c1bddd4e33" (UID: "1b06e03e-86ca-4379-9199-a4c1bddd4e33"). InnerVolumeSpecName "kube-api-access-rng2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.147955 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fspcx" event={"ID":"1b06e03e-86ca-4379-9199-a4c1bddd4e33","Type":"ContainerDied","Data":"e8e1a4eb6862eca7b6bd2bab48a2bfd7e0846ac999e1d767e5f145a29ff993da"}
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.148002 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e1a4eb6862eca7b6bd2bab48a2bfd7e0846ac999e1d767e5f145a29ff993da"
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.148124 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fspcx"
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.160692 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rng2p\" (UniqueName: \"kubernetes.io/projected/1b06e03e-86ca-4379-9199-a4c1bddd4e33-kube-api-access-rng2p\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.161051 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rshws\" (UniqueName: \"kubernetes.io/projected/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-kube-api-access-rshws\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.161092 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de481fb0-7bd9-496e-99e1-5a3d1a25e47b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.161102 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b06e03e-86ca-4379-9199-a4c1bddd4e33-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.161226 4854 generic.go:334] "Generic (PLEG): container finished" podID="7c244d13-e5f5-4f26-a2b4-361a8012b0c1" containerID="ff6c8932491e14a996d5c0dd2761667e73f50b596c819e84d1cb1ad74860b7d1" exitCode=0
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.161289 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" event={"ID":"7c244d13-e5f5-4f26-a2b4-361a8012b0c1","Type":"ContainerDied","Data":"ff6c8932491e14a996d5c0dd2761667e73f50b596c819e84d1cb1ad74860b7d1"}
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.177315 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rmh8c" event={"ID":"50e5be2e-c854-47b3-b5c5-312a82700553","Type":"ContainerDied","Data":"2c18ad83e63a4aed636889a42df51df1b95d61ac5f229e131fa8c306c15c3db1"}
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.177057 4854 generic.go:334] "Generic (PLEG): container finished" podID="50e5be2e-c854-47b3-b5c5-312a82700553" containerID="2c18ad83e63a4aed636889a42df51df1b95d61ac5f229e131fa8c306c15c3db1" exitCode=0
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.182585 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4c08-account-create-update-7pbzj" event={"ID":"de481fb0-7bd9-496e-99e1-5a3d1a25e47b","Type":"ContainerDied","Data":"b258adfd7f24b821faba39e0bb3c4bdd8e956b67d47d6fac5c6e1f0d44a2800b"}
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.182610 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b258adfd7f24b821faba39e0bb3c4bdd8e956b67d47d6fac5c6e1f0d44a2800b"
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.182681 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4c08-account-create-update-7pbzj"
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.191483 4854 generic.go:334] "Generic (PLEG): container finished" podID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerID="cb33efd6867bf5e6a28760f46ec744ebf7f54b9c7e975f3f4daa06727ac6d6ee" exitCode=0
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.191542 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wtw2t" event={"ID":"6e3f49c8-b025-4f3c-b356-847e0286a103","Type":"ContainerDied","Data":"cb33efd6867bf5e6a28760f46ec744ebf7f54b9c7e975f3f4daa06727ac6d6ee"}
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.199707 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d7c-account-create-update-lpc6t"
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.200637 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d7c-account-create-update-lpc6t" event={"ID":"7ac766a6-c8c8-4506-b86d-55b398c38783","Type":"ContainerDied","Data":"9541ee83f3d8483653cc38595cdf4617b25591096441f652a82513c293fe6b2e"}
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.200672 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9541ee83f3d8483653cc38595cdf4617b25591096441f652a82513c293fe6b2e"
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.470202 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-aee3-account-create-update-9rwqc"
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.570914 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e58db94-b238-4fd5-a833-0fc6f281465c-operator-scripts\") pod \"9e58db94-b238-4fd5-a833-0fc6f281465c\" (UID: \"9e58db94-b238-4fd5-a833-0fc6f281465c\") "
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.571281 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slhff\" (UniqueName: \"kubernetes.io/projected/9e58db94-b238-4fd5-a833-0fc6f281465c-kube-api-access-slhff\") pod \"9e58db94-b238-4fd5-a833-0fc6f281465c\" (UID: \"9e58db94-b238-4fd5-a833-0fc6f281465c\") "
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.571686 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e58db94-b238-4fd5-a833-0fc6f281465c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e58db94-b238-4fd5-a833-0fc6f281465c" (UID: "9e58db94-b238-4fd5-a833-0fc6f281465c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.572201 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e58db94-b238-4fd5-a833-0fc6f281465c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.578741 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e58db94-b238-4fd5-a833-0fc6f281465c-kube-api-access-slhff" (OuterVolumeSpecName: "kube-api-access-slhff") pod "9e58db94-b238-4fd5-a833-0fc6f281465c" (UID: "9e58db94-b238-4fd5-a833-0fc6f281465c"). InnerVolumeSpecName "kube-api-access-slhff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:20 crc kubenswrapper[4854]: I0103 06:03:20.674496 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slhff\" (UniqueName: \"kubernetes.io/projected/9e58db94-b238-4fd5-a833-0fc6f281465c-kube-api-access-slhff\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.212207 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aee3-account-create-update-9rwqc" event={"ID":"9e58db94-b238-4fd5-a833-0fc6f281465c","Type":"ContainerDied","Data":"75a19b6adc11ec28bb484adcd2ba10a266e0129ef12a49d40e6a47ad92a65913"}
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.212256 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-aee3-account-create-update-9rwqc"
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.212262 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a19b6adc11ec28bb484adcd2ba10a266e0129ef12a49d40e6a47ad92a65913"
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.220707 4854 generic.go:334] "Generic (PLEG): container finished" podID="8a21d16f-c305-4792-bad1-2eb5451b15dc" containerID="6c2b8f2cd6d5e76f60ddfd13d59dc3b7172c67d3dea0535bf21c5fea30948d35" exitCode=0
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.220918 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-27tnm" event={"ID":"8a21d16f-c305-4792-bad1-2eb5451b15dc","Type":"ContainerDied","Data":"6c2b8f2cd6d5e76f60ddfd13d59dc3b7172c67d3dea0535bf21c5fea30948d35"}
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.655925 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.684869 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb"
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.783545 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rmh8c"
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.800831 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmtmz\" (UniqueName: \"kubernetes.io/projected/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-kube-api-access-wmtmz\") pod \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\" (UID: \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\") "
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.801240 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-operator-scripts\") pod \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\" (UID: \"7c244d13-e5f5-4f26-a2b4-361a8012b0c1\") "
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.802343 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c244d13-e5f5-4f26-a2b4-361a8012b0c1" (UID: "7c244d13-e5f5-4f26-a2b4-361a8012b0c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.808705 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-kube-api-access-wmtmz" (OuterVolumeSpecName: "kube-api-access-wmtmz") pod "7c244d13-e5f5-4f26-a2b4-361a8012b0c1" (UID: "7c244d13-e5f5-4f26-a2b4-361a8012b0c1"). InnerVolumeSpecName "kube-api-access-wmtmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.902704 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e5be2e-c854-47b3-b5c5-312a82700553-operator-scripts\") pod \"50e5be2e-c854-47b3-b5c5-312a82700553\" (UID: \"50e5be2e-c854-47b3-b5c5-312a82700553\") "
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.902796 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddwd7\" (UniqueName: \"kubernetes.io/projected/50e5be2e-c854-47b3-b5c5-312a82700553-kube-api-access-ddwd7\") pod \"50e5be2e-c854-47b3-b5c5-312a82700553\" (UID: \"50e5be2e-c854-47b3-b5c5-312a82700553\") "
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.903324 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.903344 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmtmz\" (UniqueName: \"kubernetes.io/projected/7c244d13-e5f5-4f26-a2b4-361a8012b0c1-kube-api-access-wmtmz\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.904273 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e5be2e-c854-47b3-b5c5-312a82700553-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50e5be2e-c854-47b3-b5c5-312a82700553" (UID: "50e5be2e-c854-47b3-b5c5-312a82700553"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:21 crc kubenswrapper[4854]: I0103 06:03:21.908843 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e5be2e-c854-47b3-b5c5-312a82700553-kube-api-access-ddwd7" (OuterVolumeSpecName: "kube-api-access-ddwd7") pod "50e5be2e-c854-47b3-b5c5-312a82700553" (UID: "50e5be2e-c854-47b3-b5c5-312a82700553"). InnerVolumeSpecName "kube-api-access-ddwd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.006263 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50e5be2e-c854-47b3-b5c5-312a82700553-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.006316 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddwd7\" (UniqueName: \"kubernetes.io/projected/50e5be2e-c854-47b3-b5c5-312a82700553-kube-api-access-ddwd7\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.232675 4854 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod1ca99325-405c-467a-a9e0-53c5e4fb96e4"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1ca99325-405c-467a-a9e0-53c5e4fb96e4] : Timed out while waiting for systemd to remove kubepods-burstable-pod1ca99325_405c_467a_a9e0_53c5e4fb96e4.slice"
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.234470 4854 generic.go:334] "Generic (PLEG): container finished" podID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerID="61da04aa23070db5bd8e652fd9d7dc4eb5ea08d20f929d20b3d913d951c1c41f" exitCode=0
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.234499 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxt9h" event={"ID":"6f9e2844-dbc2-488b-bb08-77f9a4284a35","Type":"ContainerDied","Data":"61da04aa23070db5bd8e652fd9d7dc4eb5ea08d20f929d20b3d913d951c1c41f"}
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.236828 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb" event={"ID":"7c244d13-e5f5-4f26-a2b4-361a8012b0c1","Type":"ContainerDied","Data":"278cdf65e574bcce1384ffcd1ba7e1c073722fd3ab90cb8b7272492b5ec56be9"}
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.236891 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="278cdf65e574bcce1384ffcd1ba7e1c073722fd3ab90cb8b7272492b5ec56be9"
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.238370 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a2fa-account-create-update-5tjqb"
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.245726 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rmh8c"
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.246260 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rmh8c" event={"ID":"50e5be2e-c854-47b3-b5c5-312a82700553","Type":"ContainerDied","Data":"a0853f075496f30ce6c40b70fba126ce448d2c5d36dcbf075d9a301c86703da8"}
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.246306 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0853f075496f30ce6c40b70fba126ce448d2c5d36dcbf075d9a301c86703da8"
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.736962 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-27tnm"
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.781422 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.786427 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dll2c" podUID="04465680-9e76-4b04-aa5f-c94218a6bf28" containerName="ovn-controller" probeResult="failure" output=<
Jan 03 06:03:22 crc kubenswrapper[4854]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 03 06:03:22 crc kubenswrapper[4854]: >
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.820992 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a21d16f-c305-4792-bad1-2eb5451b15dc-operator-scripts\") pod \"8a21d16f-c305-4792-bad1-2eb5451b15dc\" (UID: \"8a21d16f-c305-4792-bad1-2eb5451b15dc\") "
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.821069 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k89sj\" (UniqueName: \"kubernetes.io/projected/8a21d16f-c305-4792-bad1-2eb5451b15dc-kube-api-access-k89sj\") pod \"8a21d16f-c305-4792-bad1-2eb5451b15dc\" (UID: \"8a21d16f-c305-4792-bad1-2eb5451b15dc\") "
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.823851 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a21d16f-c305-4792-bad1-2eb5451b15dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a21d16f-c305-4792-bad1-2eb5451b15dc" (UID: "8a21d16f-c305-4792-bad1-2eb5451b15dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.827911 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a21d16f-c305-4792-bad1-2eb5451b15dc-kube-api-access-k89sj" (OuterVolumeSpecName: "kube-api-access-k89sj") pod "8a21d16f-c305-4792-bad1-2eb5451b15dc" (UID: "8a21d16f-c305-4792-bad1-2eb5451b15dc"). InnerVolumeSpecName "kube-api-access-k89sj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.924052 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a21d16f-c305-4792-bad1-2eb5451b15dc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:22 crc kubenswrapper[4854]: I0103 06:03:22.924129 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k89sj\" (UniqueName: \"kubernetes.io/projected/8a21d16f-c305-4792-bad1-2eb5451b15dc-kube-api-access-k89sj\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:23 crc kubenswrapper[4854]: I0103 06:03:23.260019 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-27tnm" event={"ID":"8a21d16f-c305-4792-bad1-2eb5451b15dc","Type":"ContainerDied","Data":"e3b4a24d71a98b4de63f281dc490026d4ccfdd33e449eeb684147ea5f9d79106"}
Jan 03 06:03:23 crc kubenswrapper[4854]: I0103 06:03:23.260064 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-27tnm"
Jan 03 06:03:23 crc kubenswrapper[4854]: I0103 06:03:23.260069 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b4a24d71a98b4de63f281dc490026d4ccfdd33e449eeb684147ea5f9d79106"
Jan 03 06:03:23 crc kubenswrapper[4854]: I0103 06:03:23.755522 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 03 06:03:23 crc kubenswrapper[4854]: I0103 06:03:23.881662 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 03 06:03:24 crc kubenswrapper[4854]: I0103 06:03:24.271223 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wtw2t" event={"ID":"6e3f49c8-b025-4f3c-b356-847e0286a103","Type":"ContainerStarted","Data":"b7d8550b767b745c10631f8f4cfd712f0fbf747774b54c5ddba943f86791c42c"}
Jan 03 06:03:24 crc kubenswrapper[4854]: I0103 06:03:24.271707 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wtw2t"
Jan 03 06:03:24 crc kubenswrapper[4854]: I0103 06:03:24.273579 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" event={"ID":"ec6dfccc-6930-4425-b5d6-511366ab6786","Type":"ContainerDied","Data":"de25c359eaad7b92aa63fa4ef0fd0c752e5fac56ef7791d15044fc12d09efe7f"}
Jan 03 06:03:24 crc kubenswrapper[4854]: I0103 06:03:24.273855 4854 generic.go:334] "Generic (PLEG): container finished" podID="ec6dfccc-6930-4425-b5d6-511366ab6786" containerID="de25c359eaad7b92aa63fa4ef0fd0c752e5fac56ef7791d15044fc12d09efe7f" exitCode=0
Jan 03 06:03:24 crc kubenswrapper[4854]: I0103 06:03:24.293915 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-wtw2t" podStartSLOduration=30.293898878 podStartE2EDuration="30.293898878s" podCreationTimestamp="2026-01-03 06:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:03:24.288882394 +0000 UTC m=+1382.615458986" watchObservedRunningTime="2026-01-03 06:03:24.293898878 +0000 UTC m=+1382.620475470"
Jan 03 06:03:24 crc kubenswrapper[4854]: I0103 06:03:24.315771 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 03 06:03:25 crc kubenswrapper[4854]: I0103 06:03:25.876988 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c"
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.005211 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgstn\" (UniqueName: \"kubernetes.io/projected/ec6dfccc-6930-4425-b5d6-511366ab6786-kube-api-access-dgstn\") pod \"ec6dfccc-6930-4425-b5d6-511366ab6786\" (UID: \"ec6dfccc-6930-4425-b5d6-511366ab6786\") "
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.005256 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6dfccc-6930-4425-b5d6-511366ab6786-operator-scripts\") pod \"ec6dfccc-6930-4425-b5d6-511366ab6786\" (UID: \"ec6dfccc-6930-4425-b5d6-511366ab6786\") "
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.005807 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6dfccc-6930-4425-b5d6-511366ab6786-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec6dfccc-6930-4425-b5d6-511366ab6786" (UID: "ec6dfccc-6930-4425-b5d6-511366ab6786"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.011440 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6dfccc-6930-4425-b5d6-511366ab6786-kube-api-access-dgstn" (OuterVolumeSpecName: "kube-api-access-dgstn") pod "ec6dfccc-6930-4425-b5d6-511366ab6786" (UID: "ec6dfccc-6930-4425-b5d6-511366ab6786"). InnerVolumeSpecName "kube-api-access-dgstn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.107341 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgstn\" (UniqueName: \"kubernetes.io/projected/ec6dfccc-6930-4425-b5d6-511366ab6786-kube-api-access-dgstn\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.107388 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec6dfccc-6930-4425-b5d6-511366ab6786-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.298788 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c" event={"ID":"ec6dfccc-6930-4425-b5d6-511366ab6786","Type":"ContainerDied","Data":"3e53776b63fcd5c8d37e65ff115a53e94c3904b75c520313b8c61bf7aff714e7"}
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.298830 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e53776b63fcd5c8d37e65ff115a53e94c3904b75c520313b8c61bf7aff714e7"
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.298887 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-4zb5c"
Jan 03 06:03:26 crc kubenswrapper[4854]: I0103 06:03:26.706707 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.029841 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030497 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e58db94-b238-4fd5-a833-0fc6f281465c" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030514 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e58db94-b238-4fd5-a833-0fc6f281465c" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030522 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6dfccc-6930-4425-b5d6-511366ab6786" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030529 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6dfccc-6930-4425-b5d6-511366ab6786" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030537 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a21d16f-c305-4792-bad1-2eb5451b15dc" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030546 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a21d16f-c305-4792-bad1-2eb5451b15dc" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030554 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de481fb0-7bd9-496e-99e1-5a3d1a25e47b" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030560 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="de481fb0-7bd9-496e-99e1-5a3d1a25e47b" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030570 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c244d13-e5f5-4f26-a2b4-361a8012b0c1" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030575 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c244d13-e5f5-4f26-a2b4-361a8012b0c1" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030591 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8251ed1f-e0cc-48dd-8bbd-14c8753a65a3" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030599 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8251ed1f-e0cc-48dd-8bbd-14c8753a65a3" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030618 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac766a6-c8c8-4506-b86d-55b398c38783" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030624 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac766a6-c8c8-4506-b86d-55b398c38783" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030659 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e5be2e-c854-47b3-b5c5-312a82700553" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030665 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e5be2e-c854-47b3-b5c5-312a82700553" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.030673 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b06e03e-86ca-4379-9199-a4c1bddd4e33" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030679 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b06e03e-86ca-4379-9199-a4c1bddd4e33" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030902 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c244d13-e5f5-4f26-a2b4-361a8012b0c1" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030913 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e5be2e-c854-47b3-b5c5-312a82700553" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030926 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a21d16f-c305-4792-bad1-2eb5451b15dc" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030941 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e58db94-b238-4fd5-a833-0fc6f281465c" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030953 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6dfccc-6930-4425-b5d6-511366ab6786" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030965 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="de481fb0-7bd9-496e-99e1-5a3d1a25e47b" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030977 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b06e03e-86ca-4379-9199-a4c1bddd4e33" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.030991 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac766a6-c8c8-4506-b86d-55b398c38783" containerName="mariadb-account-create-update"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.031006 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8251ed1f-e0cc-48dd-8bbd-14c8753a65a3" containerName="mariadb-database-create"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.041951 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.050321 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.054898 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.055356 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.055490 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-94pmn"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.056306 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.091203 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dll2c-config-h22g4"]
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.092853 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dll2c-config-h22g4"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.098323 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.126451 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dll2c-config-h22g4"]
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131018 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run-ovn\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131070 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-additional-scripts\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131116 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131164 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx7dz\" (UniqueName: \"kubernetes.io/projected/6047aa72-faf9-4f4d-95ab-df8b1230cedf-kube-api-access-mx7dz\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0"
Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131184 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-log-ovn\") pod \"ovn-controller-dll2c-config-h22g4\" (UID:
\"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131205 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6047aa72-faf9-4f4d-95ab-df8b1230cedf-config\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131233 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98pbh\" (UniqueName: \"kubernetes.io/projected/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-kube-api-access-98pbh\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131282 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131336 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-scripts\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131353 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6047aa72-faf9-4f4d-95ab-df8b1230cedf-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131375 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6047aa72-faf9-4f4d-95ab-df8b1230cedf-scripts\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131395 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.131433 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234048 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx7dz\" (UniqueName: \"kubernetes.io/projected/6047aa72-faf9-4f4d-95ab-df8b1230cedf-kube-api-access-mx7dz\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 
06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234104 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-log-ovn\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234456 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6047aa72-faf9-4f4d-95ab-df8b1230cedf-config\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234556 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98pbh\" (UniqueName: \"kubernetes.io/projected/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-kube-api-access-98pbh\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234723 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234902 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-scripts\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234927 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6047aa72-faf9-4f4d-95ab-df8b1230cedf-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234962 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6047aa72-faf9-4f4d-95ab-df8b1230cedf-scripts\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.234979 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.235031 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.235108 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run-ovn\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.235143 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-additional-scripts\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.235162 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.235500 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.235851 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-log-ovn\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.236767 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6047aa72-faf9-4f4d-95ab-df8b1230cedf-config\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.240532 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6047aa72-faf9-4f4d-95ab-df8b1230cedf-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.241121 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6047aa72-faf9-4f4d-95ab-df8b1230cedf-scripts\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.244121 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-scripts\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.244166 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc 
kubenswrapper[4854]: I0103 06:03:27.244197 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run-ovn\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.244719 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-additional-scripts\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.250166 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.261006 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6047aa72-faf9-4f4d-95ab-df8b1230cedf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.261702 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98pbh\" (UniqueName: \"kubernetes.io/projected/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-kube-api-access-98pbh\") pod \"ovn-controller-dll2c-config-h22g4\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.273814 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx7dz\" (UniqueName: \"kubernetes.io/projected/6047aa72-faf9-4f4d-95ab-df8b1230cedf-kube-api-access-mx7dz\") pod \"ovn-northd-0\" (UID: \"6047aa72-faf9-4f4d-95ab-df8b1230cedf\") " pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.384907 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.417104 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.644058 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.644277 4854 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.644520 4854 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 06:03:27 crc kubenswrapper[4854]: E0103 06:03:27.644598 4854 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift podName:f6a47ad8-d256-453c-910a-1506c8f73657 nodeName:}" failed. No retries permitted until 2026-01-03 06:03:59.644573485 +0000 UTC m=+1417.971150067 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift") pod "swift-storage-0" (UID: "f6a47ad8-d256-453c-910a-1506c8f73657") : configmap "swift-ring-files" not found Jan 03 06:03:27 crc kubenswrapper[4854]: I0103 06:03:27.723698 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dll2c" podUID="04465680-9e76-4b04-aa5f-c94218a6bf28" containerName="ovn-controller" probeResult="failure" output=< Jan 03 06:03:27 crc kubenswrapper[4854]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 03 06:03:27 crc kubenswrapper[4854]: > Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.057802 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-v8pxd"] Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.060285 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.184324 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sgxt6" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.184707 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.214453 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v8pxd"] Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.282310 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-combined-ca-bundle\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.282643 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-db-sync-config-data\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.282854 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-config-data\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.282878 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkht\" (UniqueName: \"kubernetes.io/projected/9acf61c2-85c5-4ba2-9f4b-0778c961a268-kube-api-access-clkht\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.384792 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-combined-ca-bundle\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.384891 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-db-sync-config-data\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.384965 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-config-data\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.384986 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clkht\" (UniqueName: \"kubernetes.io/projected/9acf61c2-85c5-4ba2-9f4b-0778c961a268-kube-api-access-clkht\") pod 
\"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.393576 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-config-data\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.394614 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-combined-ca-bundle\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.398609 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-db-sync-config-data\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.408755 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clkht\" (UniqueName: \"kubernetes.io/projected/9acf61c2-85c5-4ba2-9f4b-0778c961a268-kube-api-access-clkht\") pod \"glance-db-sync-v8pxd\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: I0103 06:03:28.537034 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v8pxd" Jan 03 06:03:28 crc kubenswrapper[4854]: E0103 06:03:28.951635 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.045666 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.068683 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.100184 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.141787 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.306494 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dll2c-config-h22g4"] Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.382090 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l7b7k" 
event={"ID":"db4adf09-eb0a-4a6e-a49f-78e43cf04124","Type":"ContainerStarted","Data":"339454c0978d48a1ebd80f6a9e8836152f8dc2d53111e9f1d426564d8c134fee"} Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.388357 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxt9h" event={"ID":"6f9e2844-dbc2-488b-bb08-77f9a4284a35","Type":"ContainerStarted","Data":"182126e5ed38e163d25670173a949168214058743116da0e2ef3272d301f52f2"} Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.393057 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerStarted","Data":"b229eeb9b9ea228e815ad76287293926827cfd7ae30bb370526480b9e7e3a56b"} Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.393462 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.395379 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dll2c-config-h22g4" event={"ID":"2acb1b92-2b44-4c8a-b80e-12b62db7de4a","Type":"ContainerStarted","Data":"dba45858a9f1bcd62b14c7a21f451ba2da9a2f2b830b44937ca6e7ee2872b2a6"} Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.415389 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6047aa72-faf9-4f4d-95ab-df8b1230cedf","Type":"ContainerStarted","Data":"578c05a1553bf6a2dd919d77e7a875c073f2f204155400f6fcd07e6f938938b4"} Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.511679 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nxt9h" podStartSLOduration=20.227835225 podStartE2EDuration="37.511661481s" podCreationTimestamp="2026-01-03 06:02:52 +0000 UTC" firstStartedPulling="2026-01-03 06:03:11.376191583 +0000 UTC m=+1369.702768155" lastFinishedPulling="2026-01-03 06:03:28.660017839 +0000 UTC m=+1386.986594411" observedRunningTime="2026-01-03 06:03:29.502695119 +0000 UTC m=+1387.829271691" watchObservedRunningTime="2026-01-03 06:03:29.511661481 +0000 UTC m=+1387.838238043" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.512634 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-l7b7k" podStartSLOduration=20.013264535 podStartE2EDuration="30.512625204s" podCreationTimestamp="2026-01-03 06:02:59 +0000 UTC" firstStartedPulling="2026-01-03 06:03:18.164896494 +0000 UTC m=+1376.491473066" lastFinishedPulling="2026-01-03 06:03:28.664257163 +0000 UTC m=+1386.990833735" observedRunningTime="2026-01-03 06:03:29.453574387 +0000 UTC m=+1387.780150959" watchObservedRunningTime="2026-01-03 06:03:29.512625204 +0000 UTC m=+1387.839201776" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.701161 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q85bq"] Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.702960 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.731708 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q85bq"] Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.741135 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v8pxd"] Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.826570 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktptz\" (UniqueName: \"kubernetes.io/projected/d219f3df-5003-4c46-a952-cdb9485b9879-kube-api-access-ktptz\") pod \"mysqld-exporter-openstack-cell1-db-create-q85bq\" (UID: \"d219f3df-5003-4c46-a952-cdb9485b9879\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.826733 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d219f3df-5003-4c46-a952-cdb9485b9879-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-q85bq\" (UID: \"d219f3df-5003-4c46-a952-cdb9485b9879\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.895501 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.929200 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktptz\" (UniqueName: \"kubernetes.io/projected/d219f3df-5003-4c46-a952-cdb9485b9879-kube-api-access-ktptz\") pod \"mysqld-exporter-openstack-cell1-db-create-q85bq\" (UID: \"d219f3df-5003-4c46-a952-cdb9485b9879\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.929345 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d219f3df-5003-4c46-a952-cdb9485b9879-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-q85bq\" (UID: \"d219f3df-5003-4c46-a952-cdb9485b9879\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.930219 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d219f3df-5003-4c46-a952-cdb9485b9879-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-q85bq\" (UID: \"d219f3df-5003-4c46-a952-cdb9485b9879\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.980317 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fbbv8"] Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.980580 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" podUID="8299f6ba-92fe-41ee-8a63-184f8a594135" containerName="dnsmasq-dns" containerID="cri-o://7c14405b7572123f74d20adf3796a74bdd3aa749405989ca480262c94aa5cf71" gracePeriod=10 Jan 03 06:03:29 crc kubenswrapper[4854]: I0103 06:03:29.983698 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktptz\" (UniqueName: 
\"kubernetes.io/projected/d219f3df-5003-4c46-a952-cdb9485b9879-kube-api-access-ktptz\") pod \"mysqld-exporter-openstack-cell1-db-create-q85bq\" (UID: \"d219f3df-5003-4c46-a952-cdb9485b9879\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.029876 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-94f4-account-create-update-hlktd"] Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.030300 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.031675 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.040374 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-94f4-account-create-update-hlktd"] Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.040529 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.134807 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d4a259-3160-44ac-8509-3e52076196be-operator-scripts\") pod \"mysqld-exporter-94f4-account-create-update-hlktd\" (UID: \"70d4a259-3160-44ac-8509-3e52076196be\") " pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.134889 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ss2b\" (UniqueName: \"kubernetes.io/projected/70d4a259-3160-44ac-8509-3e52076196be-kube-api-access-4ss2b\") pod \"mysqld-exporter-94f4-account-create-update-hlktd\" (UID: \"70d4a259-3160-44ac-8509-3e52076196be\") " pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.239951 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d4a259-3160-44ac-8509-3e52076196be-operator-scripts\") pod \"mysqld-exporter-94f4-account-create-update-hlktd\" (UID: \"70d4a259-3160-44ac-8509-3e52076196be\") " pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.240438 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ss2b\" (UniqueName: \"kubernetes.io/projected/70d4a259-3160-44ac-8509-3e52076196be-kube-api-access-4ss2b\") pod \"mysqld-exporter-94f4-account-create-update-hlktd\" (UID: \"70d4a259-3160-44ac-8509-3e52076196be\") " pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.241619 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d4a259-3160-44ac-8509-3e52076196be-operator-scripts\") pod \"mysqld-exporter-94f4-account-create-update-hlktd\" (UID: \"70d4a259-3160-44ac-8509-3e52076196be\") " pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.273670 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4ss2b\" (UniqueName: \"kubernetes.io/projected/70d4a259-3160-44ac-8509-3e52076196be-kube-api-access-4ss2b\") pod \"mysqld-exporter-94f4-account-create-update-hlktd\" (UID: \"70d4a259-3160-44ac-8509-3e52076196be\") " pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.486417 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.523468 4854 generic.go:334] "Generic (PLEG): container finished" podID="8299f6ba-92fe-41ee-8a63-184f8a594135" containerID="7c14405b7572123f74d20adf3796a74bdd3aa749405989ca480262c94aa5cf71" exitCode=0 Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.523584 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" event={"ID":"8299f6ba-92fe-41ee-8a63-184f8a594135","Type":"ContainerDied","Data":"7c14405b7572123f74d20adf3796a74bdd3aa749405989ca480262c94aa5cf71"} Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.527172 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v8pxd" event={"ID":"9acf61c2-85c5-4ba2-9f4b-0778c961a268","Type":"ContainerStarted","Data":"9dfefc4a57e2033d1660866effe3614369edfdb2238576eed082b3834e3a12bd"} Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.547585 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dll2c-config-h22g4" event={"ID":"2acb1b92-2b44-4c8a-b80e-12b62db7de4a","Type":"ContainerStarted","Data":"695571ad73ffdb8f7163dbc47877775d4429b8cfb4a0a41df26e999591b95662"} Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.579742 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dll2c-config-h22g4" podStartSLOduration=3.579718281 podStartE2EDuration="3.579718281s" podCreationTimestamp="2026-01-03 06:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:03:30.568658528 +0000 UTC m=+1388.895235120" watchObservedRunningTime="2026-01-03 06:03:30.579718281 +0000 UTC m=+1388.906294853" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.608216 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rmh8c"] Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.698170 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rmh8c"] Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.710832 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-55ddh"] Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.712561 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.726093 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.734404 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-55ddh"] Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.786583 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5plgn\" (UniqueName: \"kubernetes.io/projected/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-kube-api-access-5plgn\") pod \"root-account-create-update-55ddh\" (UID: \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\") " pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.786844 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-operator-scripts\") pod \"root-account-create-update-55ddh\" (UID: \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\") " pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.834427 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q85bq"] Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.888577 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5plgn\" (UniqueName: \"kubernetes.io/projected/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-kube-api-access-5plgn\") pod \"root-account-create-update-55ddh\" (UID: \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\") " pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.888939 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-operator-scripts\") pod \"root-account-create-update-55ddh\" (UID: \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\") " pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.889970 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-operator-scripts\") pod \"root-account-create-update-55ddh\" (UID: \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\") " pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:30 crc kubenswrapper[4854]: I0103 06:03:30.911333 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5plgn\" (UniqueName: \"kubernetes.io/projected/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-kube-api-access-5plgn\") pod \"root-account-create-update-55ddh\" (UID: \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\") " pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:30 crc kubenswrapper[4854]: E0103 06:03:30.926132 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2acb1b92_2b44_4c8a_b80e_12b62db7de4a.slice/crio-695571ad73ffdb8f7163dbc47877775d4429b8cfb4a0a41df26e999591b95662.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2acb1b92_2b44_4c8a_b80e_12b62db7de4a.slice/crio-conmon-695571ad73ffdb8f7163dbc47877775d4429b8cfb4a0a41df26e999591b95662.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.109576 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:31 crc kubenswrapper[4854]: W0103 06:03:31.113423 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd219f3df_5003_4c46_a952_cdb9485b9879.slice/crio-02a471d5e2581343f7c6bd3cace4f716170289cb6f8e752c32fef9b443004c8c WatchSource:0}: Error finding container 02a471d5e2581343f7c6bd3cace4f716170289cb6f8e752c32fef9b443004c8c: Status 404 returned error can't find the container with id 02a471d5e2581343f7c6bd3cace4f716170289cb6f8e752c32fef9b443004c8c Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.417846 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.507857 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-sb\") pod \"8299f6ba-92fe-41ee-8a63-184f8a594135\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.507923 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-dns-svc\") pod \"8299f6ba-92fe-41ee-8a63-184f8a594135\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.507964 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-nb\") pod \"8299f6ba-92fe-41ee-8a63-184f8a594135\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.508022 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgtlz\" (UniqueName: \"kubernetes.io/projected/8299f6ba-92fe-41ee-8a63-184f8a594135-kube-api-access-cgtlz\") pod \"8299f6ba-92fe-41ee-8a63-184f8a594135\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.508400 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-config\") pod \"8299f6ba-92fe-41ee-8a63-184f8a594135\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.525241 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8299f6ba-92fe-41ee-8a63-184f8a594135-kube-api-access-cgtlz" (OuterVolumeSpecName: "kube-api-access-cgtlz") pod "8299f6ba-92fe-41ee-8a63-184f8a594135" (UID: "8299f6ba-92fe-41ee-8a63-184f8a594135"). InnerVolumeSpecName "kube-api-access-cgtlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.567920 4854 generic.go:334] "Generic (PLEG): container finished" podID="2acb1b92-2b44-4c8a-b80e-12b62db7de4a" containerID="695571ad73ffdb8f7163dbc47877775d4429b8cfb4a0a41df26e999591b95662" exitCode=0 Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.568210 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dll2c-config-h22g4" event={"ID":"2acb1b92-2b44-4c8a-b80e-12b62db7de4a","Type":"ContainerDied","Data":"695571ad73ffdb8f7163dbc47877775d4429b8cfb4a0a41df26e999591b95662"} Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.573953 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" event={"ID":"8299f6ba-92fe-41ee-8a63-184f8a594135","Type":"ContainerDied","Data":"4df9572494682062d0afd7715f1fb65c7de8ab388be07a0e2ac86182f54751b7"} Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.574004 4854 scope.go:117] "RemoveContainer" containerID="7c14405b7572123f74d20adf3796a74bdd3aa749405989ca480262c94aa5cf71" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.574364 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fbbv8" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.581388 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" event={"ID":"d219f3df-5003-4c46-a952-cdb9485b9879","Type":"ContainerStarted","Data":"02a471d5e2581343f7c6bd3cace4f716170289cb6f8e752c32fef9b443004c8c"} Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.618009 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgtlz\" (UniqueName: \"kubernetes.io/projected/8299f6ba-92fe-41ee-8a63-184f8a594135-kube-api-access-cgtlz\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.665938 4854 scope.go:117] "RemoveContainer" containerID="a5a410cda3fc8bc9f631f873d36ad8a2ab52bf33eb272f5d2703dd56daa38842" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.689617 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-55ddh"] Jan 03 06:03:31 crc kubenswrapper[4854]: W0103 06:03:31.692261 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb4bd4c9_70cf_4ee8_a3c4_71bc3a2ead5d.slice/crio-a229cb3ad7da62380c741671aa1bf82711e9c063af1ab5d9ad0e55ad20a7a99e WatchSource:0}: Error finding container a229cb3ad7da62380c741671aa1bf82711e9c063af1ab5d9ad0e55ad20a7a99e: Status 404 returned error can't find the container with id a229cb3ad7da62380c741671aa1bf82711e9c063af1ab5d9ad0e55ad20a7a99e Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.811240 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8299f6ba-92fe-41ee-8a63-184f8a594135" (UID: "8299f6ba-92fe-41ee-8a63-184f8a594135"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.811362 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-config" (OuterVolumeSpecName: "config") pod "8299f6ba-92fe-41ee-8a63-184f8a594135" (UID: "8299f6ba-92fe-41ee-8a63-184f8a594135"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.820731 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8299f6ba-92fe-41ee-8a63-184f8a594135" (UID: "8299f6ba-92fe-41ee-8a63-184f8a594135"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.821177 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-dns-svc\") pod \"8299f6ba-92fe-41ee-8a63-184f8a594135\" (UID: \"8299f6ba-92fe-41ee-8a63-184f8a594135\") " Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.822034 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.822179 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:31 crc kubenswrapper[4854]: W0103 06:03:31.822334 4854 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8299f6ba-92fe-41ee-8a63-184f8a594135/volumes/kubernetes.io~configmap/dns-svc Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.822397 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8299f6ba-92fe-41ee-8a63-184f8a594135" (UID: "8299f6ba-92fe-41ee-8a63-184f8a594135"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.835585 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8299f6ba-92fe-41ee-8a63-184f8a594135" (UID: "8299f6ba-92fe-41ee-8a63-184f8a594135"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.838743 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-94f4-account-create-update-hlktd"] Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.925584 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:31 crc kubenswrapper[4854]: I0103 06:03:31.925615 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8299f6ba-92fe-41ee-8a63-184f8a594135-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.068323 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fbbv8"] Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.077005 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fbbv8"] Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.133147 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e5be2e-c854-47b3-b5c5-312a82700553" path="/var/lib/kubelet/pods/50e5be2e-c854-47b3-b5c5-312a82700553/volumes" Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.133693 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8299f6ba-92fe-41ee-8a63-184f8a594135" path="/var/lib/kubelet/pods/8299f6ba-92fe-41ee-8a63-184f8a594135/volumes" Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.595903 4854 generic.go:334] "Generic (PLEG): container finished" podID="70d4a259-3160-44ac-8509-3e52076196be" containerID="a302c8cdf88997817ebc655772509708609b0e88d3d9ad22260f2adbe7bca8f9" exitCode=0 Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.595958 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" event={"ID":"70d4a259-3160-44ac-8509-3e52076196be","Type":"ContainerDied","Data":"a302c8cdf88997817ebc655772509708609b0e88d3d9ad22260f2adbe7bca8f9"} Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.596311 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" event={"ID":"70d4a259-3160-44ac-8509-3e52076196be","Type":"ContainerStarted","Data":"31776d7faeb99db0032d145970e5f07a12b64a32acad40fc6327769fc4114d53"} Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.600430 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6047aa72-faf9-4f4d-95ab-df8b1230cedf","Type":"ContainerStarted","Data":"349c9190fac6b302927e50cf851cd45a2161ef6a601aaae7d9455e537fba05a9"} Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.600493 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6047aa72-faf9-4f4d-95ab-df8b1230cedf","Type":"ContainerStarted","Data":"bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd"} Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.600615 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.603705 4854 generic.go:334] "Generic (PLEG): container finished" podID="d219f3df-5003-4c46-a952-cdb9485b9879" containerID="99f19915558ff686ab30f9271ddb53abb3fd788ffbefbdcb3a9af040acd5d16d" exitCode=0 Jan 03 06:03:32 crc 
kubenswrapper[4854]: I0103 06:03:32.603779 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" event={"ID":"d219f3df-5003-4c46-a952-cdb9485b9879","Type":"ContainerDied","Data":"99f19915558ff686ab30f9271ddb53abb3fd788ffbefbdcb3a9af040acd5d16d"} Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.608241 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerStarted","Data":"8e1fb1f7a1282afb51acc14df15dfebcbd387a8ad192da5cbe5e2dec946e3413"} Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.637034 4854 generic.go:334] "Generic (PLEG): container finished" podID="db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d" containerID="41d1eb780711c0fefdf14ffd19fc7f190770245d6cf2aff8db72d044258042fc" exitCode=0 Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.637150 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-55ddh" event={"ID":"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d","Type":"ContainerDied","Data":"41d1eb780711c0fefdf14ffd19fc7f190770245d6cf2aff8db72d044258042fc"} Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.637225 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-55ddh" event={"ID":"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d","Type":"ContainerStarted","Data":"a229cb3ad7da62380c741671aa1bf82711e9c063af1ab5d9ad0e55ad20a7a99e"} Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.665072 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.632974036 podStartE2EDuration="6.665054041s" podCreationTimestamp="2026-01-03 06:03:26 +0000 UTC" firstStartedPulling="2026-01-03 06:03:29.145736192 +0000 UTC m=+1387.472312764" lastFinishedPulling="2026-01-03 06:03:31.177816197 +0000 UTC m=+1389.504392769" observedRunningTime="2026-01-03 06:03:32.659420202 +0000 UTC m=+1390.985996784" watchObservedRunningTime="2026-01-03 06:03:32.665054041 +0000 UTC m=+1390.991630613" Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.692693 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.655461196 podStartE2EDuration="1m38.692679172s" podCreationTimestamp="2026-01-03 06:01:54 +0000 UTC" firstStartedPulling="2026-01-03 06:02:15.590068231 +0000 UTC m=+1313.916644803" lastFinishedPulling="2026-01-03 06:03:31.627286217 +0000 UTC m=+1389.953862779" observedRunningTime="2026-01-03 06:03:32.682944682 +0000 UTC m=+1391.009521254" watchObservedRunningTime="2026-01-03 06:03:32.692679172 +0000 UTC m=+1391.019255744" Jan 03 06:03:32 crc kubenswrapper[4854]: I0103 06:03:32.722860 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dll2c" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.034339 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.034621 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.130469 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.279880 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-additional-scripts\") pod \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.280071 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-log-ovn\") pod \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.280175 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run\") pod \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.280287 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run-ovn\") pod \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.280289 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2acb1b92-2b44-4c8a-b80e-12b62db7de4a" (UID: "2acb1b92-2b44-4c8a-b80e-12b62db7de4a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.280317 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run" (OuterVolumeSpecName: "var-run") pod "2acb1b92-2b44-4c8a-b80e-12b62db7de4a" (UID: "2acb1b92-2b44-4c8a-b80e-12b62db7de4a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.280336 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2acb1b92-2b44-4c8a-b80e-12b62db7de4a" (UID: "2acb1b92-2b44-4c8a-b80e-12b62db7de4a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.280355 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-scripts\") pod \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.280477 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98pbh\" (UniqueName: \"kubernetes.io/projected/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-kube-api-access-98pbh\") pod \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\" (UID: \"2acb1b92-2b44-4c8a-b80e-12b62db7de4a\") " Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.281099 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2acb1b92-2b44-4c8a-b80e-12b62db7de4a" (UID: "2acb1b92-2b44-4c8a-b80e-12b62db7de4a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.281326 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-scripts" (OuterVolumeSpecName: "scripts") pod "2acb1b92-2b44-4c8a-b80e-12b62db7de4a" (UID: "2acb1b92-2b44-4c8a-b80e-12b62db7de4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.281467 4854 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.281481 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.281491 4854 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.281502 4854 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.281512 4854 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-var-run\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.287376 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-kube-api-access-98pbh" (OuterVolumeSpecName: "kube-api-access-98pbh") pod "2acb1b92-2b44-4c8a-b80e-12b62db7de4a" (UID: "2acb1b92-2b44-4c8a-b80e-12b62db7de4a"). InnerVolumeSpecName "kube-api-access-98pbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.383822 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98pbh\" (UniqueName: \"kubernetes.io/projected/2acb1b92-2b44-4c8a-b80e-12b62db7de4a-kube-api-access-98pbh\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.650591 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dll2c-config-h22g4" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.650629 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dll2c-config-h22g4" event={"ID":"2acb1b92-2b44-4c8a-b80e-12b62db7de4a","Type":"ContainerDied","Data":"dba45858a9f1bcd62b14c7a21f451ba2da9a2f2b830b44937ca6e7ee2872b2a6"} Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.650675 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dba45858a9f1bcd62b14c7a21f451ba2da9a2f2b830b44937ca6e7ee2872b2a6" Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.684961 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dll2c-config-h22g4"] Jan 03 06:03:33 crc kubenswrapper[4854]: I0103 06:03:33.693646 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dll2c-config-h22g4"] Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.138514 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2acb1b92-2b44-4c8a-b80e-12b62db7de4a" path="/var/lib/kubelet/pods/2acb1b92-2b44-4c8a-b80e-12b62db7de4a/volumes" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.166054 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nxt9h" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="registry-server" probeResult="failure" output=< Jan 03 06:03:34 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 06:03:34 crc kubenswrapper[4854]: > Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.285019 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.291268 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.312358 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.410335 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d219f3df-5003-4c46-a952-cdb9485b9879-operator-scripts\") pod \"d219f3df-5003-4c46-a952-cdb9485b9879\" (UID: \"d219f3df-5003-4c46-a952-cdb9485b9879\") " Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.410567 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5plgn\" (UniqueName: \"kubernetes.io/projected/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-kube-api-access-5plgn\") pod \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\" (UID: \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\") " Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.410598 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d4a259-3160-44ac-8509-3e52076196be-operator-scripts\") pod \"70d4a259-3160-44ac-8509-3e52076196be\" (UID: \"70d4a259-3160-44ac-8509-3e52076196be\") " Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.410633 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ss2b\" (UniqueName: \"kubernetes.io/projected/70d4a259-3160-44ac-8509-3e52076196be-kube-api-access-4ss2b\") pod \"70d4a259-3160-44ac-8509-3e52076196be\" (UID: \"70d4a259-3160-44ac-8509-3e52076196be\") " Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.410650 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktptz\" (UniqueName: \"kubernetes.io/projected/d219f3df-5003-4c46-a952-cdb9485b9879-kube-api-access-ktptz\") pod \"d219f3df-5003-4c46-a952-cdb9485b9879\" (UID: \"d219f3df-5003-4c46-a952-cdb9485b9879\") " Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.410848 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-operator-scripts\") pod \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\" (UID: \"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d\") " Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.411402 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d219f3df-5003-4c46-a952-cdb9485b9879-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d219f3df-5003-4c46-a952-cdb9485b9879" (UID: "d219f3df-5003-4c46-a952-cdb9485b9879"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.411794 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d4a259-3160-44ac-8509-3e52076196be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70d4a259-3160-44ac-8509-3e52076196be" (UID: "70d4a259-3160-44ac-8509-3e52076196be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.414461 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d" (UID: "db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.423436 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d4a259-3160-44ac-8509-3e52076196be-kube-api-access-4ss2b" (OuterVolumeSpecName: "kube-api-access-4ss2b") pod "70d4a259-3160-44ac-8509-3e52076196be" (UID: "70d4a259-3160-44ac-8509-3e52076196be"). InnerVolumeSpecName "kube-api-access-4ss2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.423511 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-kube-api-access-5plgn" (OuterVolumeSpecName: "kube-api-access-5plgn") pod "db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d" (UID: "db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d"). InnerVolumeSpecName "kube-api-access-5plgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.435202 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d219f3df-5003-4c46-a952-cdb9485b9879-kube-api-access-ktptz" (OuterVolumeSpecName: "kube-api-access-ktptz") pod "d219f3df-5003-4c46-a952-cdb9485b9879" (UID: "d219f3df-5003-4c46-a952-cdb9485b9879"). InnerVolumeSpecName "kube-api-access-ktptz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.513471 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.513506 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d219f3df-5003-4c46-a952-cdb9485b9879-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.513516 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5plgn\" (UniqueName: \"kubernetes.io/projected/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d-kube-api-access-5plgn\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.513529 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70d4a259-3160-44ac-8509-3e52076196be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.513543 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ss2b\" (UniqueName: \"kubernetes.io/projected/70d4a259-3160-44ac-8509-3e52076196be-kube-api-access-4ss2b\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.513554 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktptz\" (UniqueName: \"kubernetes.io/projected/d219f3df-5003-4c46-a952-cdb9485b9879-kube-api-access-ktptz\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.663575 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" event={"ID":"d219f3df-5003-4c46-a952-cdb9485b9879","Type":"ContainerDied","Data":"02a471d5e2581343f7c6bd3cace4f716170289cb6f8e752c32fef9b443004c8c"} Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.663868 4854 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="02a471d5e2581343f7c6bd3cace4f716170289cb6f8e752c32fef9b443004c8c" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.663610 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q85bq" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.664882 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-55ddh" event={"ID":"db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d","Type":"ContainerDied","Data":"a229cb3ad7da62380c741671aa1bf82711e9c063af1ab5d9ad0e55ad20a7a99e"} Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.664918 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a229cb3ad7da62380c741671aa1bf82711e9c063af1ab5d9ad0e55ad20a7a99e" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.664936 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-55ddh" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.674543 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" event={"ID":"70d4a259-3160-44ac-8509-3e52076196be","Type":"ContainerDied","Data":"31776d7faeb99db0032d145970e5f07a12b64a32acad40fc6327769fc4114d53"} Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.674581 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31776d7faeb99db0032d145970e5f07a12b64a32acad40fc6327769fc4114d53" Jan 03 06:03:34 crc kubenswrapper[4854]: I0103 06:03:34.674618 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-94f4-account-create-update-hlktd" Jan 03 06:03:35 crc kubenswrapper[4854]: I0103 06:03:35.500361 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:37 crc kubenswrapper[4854]: I0103 06:03:37.715102 4854 generic.go:334] "Generic (PLEG): container finished" podID="db4adf09-eb0a-4a6e-a49f-78e43cf04124" containerID="339454c0978d48a1ebd80f6a9e8836152f8dc2d53111e9f1d426564d8c134fee" exitCode=0 Jan 03 06:03:37 crc kubenswrapper[4854]: I0103 06:03:37.715120 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l7b7k" event={"ID":"db4adf09-eb0a-4a6e-a49f-78e43cf04124","Type":"ContainerDied","Data":"339454c0978d48a1ebd80f6a9e8836152f8dc2d53111e9f1d426564d8c134fee"} Jan 03 06:03:39 crc kubenswrapper[4854]: I0103 06:03:39.062368 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 03 06:03:39 crc kubenswrapper[4854]: I0103 06:03:39.080381 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 03 06:03:39 crc kubenswrapper[4854]: I0103 06:03:39.393189 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.195224 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:03:40 crc kubenswrapper[4854]: E0103 06:03:40.196819 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d219f3df-5003-4c46-a952-cdb9485b9879" 
containerName="mariadb-database-create" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.196895 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d219f3df-5003-4c46-a952-cdb9485b9879" containerName="mariadb-database-create" Jan 03 06:03:40 crc kubenswrapper[4854]: E0103 06:03:40.196966 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8299f6ba-92fe-41ee-8a63-184f8a594135" containerName="dnsmasq-dns" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.197022 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8299f6ba-92fe-41ee-8a63-184f8a594135" containerName="dnsmasq-dns" Jan 03 06:03:40 crc kubenswrapper[4854]: E0103 06:03:40.197102 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acb1b92-2b44-4c8a-b80e-12b62db7de4a" containerName="ovn-config" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.197166 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acb1b92-2b44-4c8a-b80e-12b62db7de4a" containerName="ovn-config" Jan 03 06:03:40 crc kubenswrapper[4854]: E0103 06:03:40.197232 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d4a259-3160-44ac-8509-3e52076196be" containerName="mariadb-account-create-update" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.197284 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d4a259-3160-44ac-8509-3e52076196be" containerName="mariadb-account-create-update" Jan 03 06:03:40 crc kubenswrapper[4854]: E0103 06:03:40.197338 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8299f6ba-92fe-41ee-8a63-184f8a594135" containerName="init" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.197390 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8299f6ba-92fe-41ee-8a63-184f8a594135" containerName="init" Jan 03 06:03:40 crc kubenswrapper[4854]: E0103 06:03:40.197461 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d" containerName="mariadb-account-create-update" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.197525 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d" containerName="mariadb-account-create-update" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.197965 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8299f6ba-92fe-41ee-8a63-184f8a594135" containerName="dnsmasq-dns" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.198043 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d4a259-3160-44ac-8509-3e52076196be" containerName="mariadb-account-create-update" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.198134 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="d219f3df-5003-4c46-a952-cdb9485b9879" containerName="mariadb-database-create" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.198196 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="2acb1b92-2b44-4c8a-b80e-12b62db7de4a" containerName="ovn-config" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.198252 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d" containerName="mariadb-account-create-update" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.199012 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.203024 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.232374 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.361146 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-config-data\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.361318 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whvlb\" (UniqueName: \"kubernetes.io/projected/34ba0145-7948-47f0-bec5-7f5fc6cb1150-kube-api-access-whvlb\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.361395 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.463831 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whvlb\" (UniqueName: \"kubernetes.io/projected/34ba0145-7948-47f0-bec5-7f5fc6cb1150-kube-api-access-whvlb\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.463947 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.464059 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-config-data\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.471900 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-config-data\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.477411 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.486350 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whvlb\" (UniqueName: 
\"kubernetes.io/projected/34ba0145-7948-47f0-bec5-7f5fc6cb1150-kube-api-access-whvlb\") pod \"mysqld-exporter-0\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.499547 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.503107 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.523500 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 03 06:03:40 crc kubenswrapper[4854]: I0103 06:03:40.754900 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.307962 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6sqkj"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.310685 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.327298 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6sqkj"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.386231 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7456fb80-40dc-4ef7-86ee-062ad4b064d2-operator-scripts\") pod \"cinder-db-create-6sqkj\" (UID: \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\") " pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.386462 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dld5z\" (UniqueName: \"kubernetes.io/projected/7456fb80-40dc-4ef7-86ee-062ad4b064d2-kube-api-access-dld5z\") pod \"cinder-db-create-6sqkj\" (UID: \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\") " pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.399943 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-k7lm6"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.402240 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.429386 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k7lm6"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.488227 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-operator-scripts\") pod \"barbican-db-create-k7lm6\" (UID: \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\") " pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.488407 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7456fb80-40dc-4ef7-86ee-062ad4b064d2-operator-scripts\") pod \"cinder-db-create-6sqkj\" (UID: \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\") " pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.488501 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5d25\" (UniqueName: \"kubernetes.io/projected/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-kube-api-access-p5d25\") pod \"barbican-db-create-k7lm6\" (UID: \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\") " pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.488940 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dld5z\" (UniqueName: \"kubernetes.io/projected/7456fb80-40dc-4ef7-86ee-062ad4b064d2-kube-api-access-dld5z\") pod \"cinder-db-create-6sqkj\" (UID: \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\") " pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.489125 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7456fb80-40dc-4ef7-86ee-062ad4b064d2-operator-scripts\") pod \"cinder-db-create-6sqkj\" (UID: \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\") " pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.506386 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-ab39-account-create-update-24pb8"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.507645 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.517221 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.520009 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-ab39-account-create-update-24pb8"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.544855 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dld5z\" (UniqueName: \"kubernetes.io/projected/7456fb80-40dc-4ef7-86ee-062ad4b064d2-kube-api-access-dld5z\") pod \"cinder-db-create-6sqkj\" (UID: \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\") " pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.591964 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-operator-scripts\") pod \"barbican-db-create-k7lm6\" (UID: \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\") " pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.592050 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5d25\" (UniqueName: \"kubernetes.io/projected/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-kube-api-access-p5d25\") pod \"barbican-db-create-k7lm6\" (UID: \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\") " pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.592207 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-operator-scripts\") pod \"heat-ab39-account-create-update-24pb8\" (UID: \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\") " pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.592295 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj2kb\" (UniqueName: \"kubernetes.io/projected/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-kube-api-access-kj2kb\") pod \"heat-ab39-account-create-update-24pb8\" (UID: \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\") " pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.593394 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-operator-scripts\") pod \"barbican-db-create-k7lm6\" (UID: \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\") " pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.612008 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6fa3-account-create-update-pb5n6"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.613651 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.617126 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.620259 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5d25\" (UniqueName: \"kubernetes.io/projected/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-kube-api-access-p5d25\") pod \"barbican-db-create-k7lm6\" (UID: \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\") " pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.633164 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-ddlqd"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.634834 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.643174 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.647137 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6fa3-account-create-update-pb5n6"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.672225 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ddlqd"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.694042 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7134f57e-784c-4c40-b9d3-cf1e86a1237e-operator-scripts\") pod \"heat-db-create-ddlqd\" (UID: \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\") " pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.694130 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj2kb\" (UniqueName: \"kubernetes.io/projected/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-kube-api-access-kj2kb\") pod \"heat-ab39-account-create-update-24pb8\" (UID: \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\") " pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.694164 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhn76\" (UniqueName: \"kubernetes.io/projected/c448f9c4-fd70-4c6d-853e-c4197af5b80b-kube-api-access-qhn76\") pod \"barbican-6fa3-account-create-update-pb5n6\" (UID: \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\") " pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.694259 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c448f9c4-fd70-4c6d-853e-c4197af5b80b-operator-scripts\") pod \"barbican-6fa3-account-create-update-pb5n6\" (UID: \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\") " pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.694296 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8zvg\" (UniqueName: \"kubernetes.io/projected/7134f57e-784c-4c40-b9d3-cf1e86a1237e-kube-api-access-z8zvg\") pod \"heat-db-create-ddlqd\" (UID: \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\") " 
pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.694377 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-operator-scripts\") pod \"heat-ab39-account-create-update-24pb8\" (UID: \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\") " pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.695231 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-operator-scripts\") pod \"heat-ab39-account-create-update-24pb8\" (UID: \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\") " pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.716205 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj2kb\" (UniqueName: \"kubernetes.io/projected/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-kube-api-access-kj2kb\") pod \"heat-ab39-account-create-update-24pb8\" (UID: \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\") " pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.724051 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.736953 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8c51-account-create-update-9m5qc"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.746015 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.752517 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.756438 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8c51-account-create-update-9m5qc"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.772846 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-7hvb6"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.778688 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.787011 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.787262 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.787380 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.787499 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c82z5" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.788019 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7hvb6"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.797721 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7134f57e-784c-4c40-b9d3-cf1e86a1237e-operator-scripts\") pod \"heat-db-create-ddlqd\" (UID: \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\") " pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.797822 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhn76\" (UniqueName: \"kubernetes.io/projected/c448f9c4-fd70-4c6d-853e-c4197af5b80b-kube-api-access-qhn76\") pod \"barbican-6fa3-account-create-update-pb5n6\" (UID: \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\") " pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.797971 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c448f9c4-fd70-4c6d-853e-c4197af5b80b-operator-scripts\") pod \"barbican-6fa3-account-create-update-pb5n6\" (UID: \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\") " pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.798047 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8zvg\" (UniqueName: \"kubernetes.io/projected/7134f57e-784c-4c40-b9d3-cf1e86a1237e-kube-api-access-z8zvg\") pod \"heat-db-create-ddlqd\" (UID: \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\") " pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.801119 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c448f9c4-fd70-4c6d-853e-c4197af5b80b-operator-scripts\") pod \"barbican-6fa3-account-create-update-pb5n6\" (UID: \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\") " pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.805503 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7134f57e-784c-4c40-b9d3-cf1e86a1237e-operator-scripts\") pod \"heat-db-create-ddlqd\" (UID: \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\") " pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.819816 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8zvg\" (UniqueName: \"kubernetes.io/projected/7134f57e-784c-4c40-b9d3-cf1e86a1237e-kube-api-access-z8zvg\") pod 
\"heat-db-create-ddlqd\" (UID: \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\") " pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.819945 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4rfsf"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.821104 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhn76\" (UniqueName: \"kubernetes.io/projected/c448f9c4-fd70-4c6d-853e-c4197af5b80b-kube-api-access-qhn76\") pod \"barbican-6fa3-account-create-update-pb5n6\" (UID: \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\") " pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.822850 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.825169 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.829982 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4rfsf"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.900458 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0019311a-ce5a-4dbb-bef8-8cac6b78a304-operator-scripts\") pod \"cinder-8c51-account-create-update-9m5qc\" (UID: \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\") " pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.900526 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc310d0-38e5-41d6-a784-d8e534a5e324-operator-scripts\") pod \"neutron-db-create-4rfsf\" (UID: \"fdc310d0-38e5-41d6-a784-d8e534a5e324\") " pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.900571 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlffr\" (UniqueName: \"kubernetes.io/projected/0019311a-ce5a-4dbb-bef8-8cac6b78a304-kube-api-access-tlffr\") pod \"cinder-8c51-account-create-update-9m5qc\" (UID: \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\") " pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.900601 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-config-data\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.900678 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-combined-ca-bundle\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.900797 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq6rv\" (UniqueName: 
\"kubernetes.io/projected/8d67c032-022a-4d33-95e6-cdf31147fb4c-kube-api-access-fq6rv\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.900824 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knv2s\" (UniqueName: \"kubernetes.io/projected/fdc310d0-38e5-41d6-a784-d8e534a5e324-kube-api-access-knv2s\") pod \"neutron-db-create-4rfsf\" (UID: \"fdc310d0-38e5-41d6-a784-d8e534a5e324\") " pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.980562 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.995096 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3a66-account-create-update-thf68"] Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.995457 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.996801 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:41 crc kubenswrapper[4854]: I0103 06:03:41.999934 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.003672 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0019311a-ce5a-4dbb-bef8-8cac6b78a304-operator-scripts\") pod \"cinder-8c51-account-create-update-9m5qc\" (UID: \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\") " pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.003719 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc310d0-38e5-41d6-a784-d8e534a5e324-operator-scripts\") pod \"neutron-db-create-4rfsf\" (UID: \"fdc310d0-38e5-41d6-a784-d8e534a5e324\") " pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.003757 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlffr\" (UniqueName: \"kubernetes.io/projected/0019311a-ce5a-4dbb-bef8-8cac6b78a304-kube-api-access-tlffr\") pod \"cinder-8c51-account-create-update-9m5qc\" (UID: \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\") " pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.003777 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-config-data\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.003828 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-combined-ca-bundle\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 
06:03:42.003896 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq6rv\" (UniqueName: \"kubernetes.io/projected/8d67c032-022a-4d33-95e6-cdf31147fb4c-kube-api-access-fq6rv\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.003914 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knv2s\" (UniqueName: \"kubernetes.io/projected/fdc310d0-38e5-41d6-a784-d8e534a5e324-kube-api-access-knv2s\") pod \"neutron-db-create-4rfsf\" (UID: \"fdc310d0-38e5-41d6-a784-d8e534a5e324\") " pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.008879 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0019311a-ce5a-4dbb-bef8-8cac6b78a304-operator-scripts\") pod \"cinder-8c51-account-create-update-9m5qc\" (UID: \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\") " pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.009518 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc310d0-38e5-41d6-a784-d8e534a5e324-operator-scripts\") pod \"neutron-db-create-4rfsf\" (UID: \"fdc310d0-38e5-41d6-a784-d8e534a5e324\") " pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.010012 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-config-data\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.010203 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-combined-ca-bundle\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.022211 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knv2s\" (UniqueName: \"kubernetes.io/projected/fdc310d0-38e5-41d6-a784-d8e534a5e324-kube-api-access-knv2s\") pod \"neutron-db-create-4rfsf\" (UID: \"fdc310d0-38e5-41d6-a784-d8e534a5e324\") " pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.022310 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq6rv\" (UniqueName: \"kubernetes.io/projected/8d67c032-022a-4d33-95e6-cdf31147fb4c-kube-api-access-fq6rv\") pod \"keystone-db-sync-7hvb6\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.034802 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlffr\" (UniqueName: \"kubernetes.io/projected/0019311a-ce5a-4dbb-bef8-8cac6b78a304-kube-api-access-tlffr\") pod \"cinder-8c51-account-create-update-9m5qc\" (UID: \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\") " pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.034974 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-3a66-account-create-update-thf68"] Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.092342 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.105411 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e681201f-947c-41ba-93fe-0533bd1d071a-operator-scripts\") pod \"neutron-3a66-account-create-update-thf68\" (UID: \"e681201f-947c-41ba-93fe-0533bd1d071a\") " pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.105490 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xtpx\" (UniqueName: \"kubernetes.io/projected/e681201f-947c-41ba-93fe-0533bd1d071a-kube-api-access-2xtpx\") pod \"neutron-3a66-account-create-update-thf68\" (UID: \"e681201f-947c-41ba-93fe-0533bd1d071a\") " pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.117921 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.209039 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e681201f-947c-41ba-93fe-0533bd1d071a-operator-scripts\") pod \"neutron-3a66-account-create-update-thf68\" (UID: \"e681201f-947c-41ba-93fe-0533bd1d071a\") " pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.209223 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xtpx\" (UniqueName: \"kubernetes.io/projected/e681201f-947c-41ba-93fe-0533bd1d071a-kube-api-access-2xtpx\") pod \"neutron-3a66-account-create-update-thf68\" (UID: \"e681201f-947c-41ba-93fe-0533bd1d071a\") " pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.209373 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e681201f-947c-41ba-93fe-0533bd1d071a-operator-scripts\") pod \"neutron-3a66-account-create-update-thf68\" (UID: \"e681201f-947c-41ba-93fe-0533bd1d071a\") " pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.216297 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.229204 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xtpx\" (UniqueName: \"kubernetes.io/projected/e681201f-947c-41ba-93fe-0533bd1d071a-kube-api-access-2xtpx\") pod \"neutron-3a66-account-create-update-thf68\" (UID: \"e681201f-947c-41ba-93fe-0533bd1d071a\") " pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.321413 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:42 crc kubenswrapper[4854]: I0103 06:03:42.611680 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 03 06:03:43 crc kubenswrapper[4854]: I0103 06:03:43.242389 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:03:43 crc kubenswrapper[4854]: I0103 06:03:43.303297 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:03:43 crc kubenswrapper[4854]: I0103 06:03:43.487242 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxt9h"] Jan 03 06:03:44 crc kubenswrapper[4854]: I0103 06:03:44.817694 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nxt9h" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="registry-server" containerID="cri-o://182126e5ed38e163d25670173a949168214058743116da0e2ef3272d301f52f2" gracePeriod=2 Jan 03 06:03:44 crc kubenswrapper[4854]: I0103 06:03:44.989812 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 03 06:03:44 crc kubenswrapper[4854]: I0103 06:03:44.990398 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="config-reloader" containerID="cri-o://c98c7572e2fadf2eedfaaf84522ebe9bdf58a2d6ca98573910bedf1e6020bc9a" gracePeriod=600 Jan 03 06:03:44 crc kubenswrapper[4854]: I0103 06:03:44.990744 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="prometheus" containerID="cri-o://8e1fb1f7a1282afb51acc14df15dfebcbd387a8ad192da5cbe5e2dec946e3413" gracePeriod=600 Jan 03 06:03:44 crc kubenswrapper[4854]: I0103 06:03:44.990761 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="thanos-sidecar" containerID="cri-o://b229eeb9b9ea228e815ad76287293926827cfd7ae30bb370526480b9e7e3a56b" gracePeriod=600 Jan 03 06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.501239 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="prometheus" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 03 06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.831822 4854 generic.go:334] "Generic (PLEG): container finished" podID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerID="182126e5ed38e163d25670173a949168214058743116da0e2ef3272d301f52f2" exitCode=0 Jan 03 06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.831947 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxt9h" event={"ID":"6f9e2844-dbc2-488b-bb08-77f9a4284a35","Type":"ContainerDied","Data":"182126e5ed38e163d25670173a949168214058743116da0e2ef3272d301f52f2"} Jan 03 06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.835278 4854 generic.go:334] "Generic (PLEG): container finished" podID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerID="8e1fb1f7a1282afb51acc14df15dfebcbd387a8ad192da5cbe5e2dec946e3413" exitCode=0 Jan 03 
06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.835306 4854 generic.go:334] "Generic (PLEG): container finished" podID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerID="b229eeb9b9ea228e815ad76287293926827cfd7ae30bb370526480b9e7e3a56b" exitCode=0 Jan 03 06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.835314 4854 generic.go:334] "Generic (PLEG): container finished" podID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerID="c98c7572e2fadf2eedfaaf84522ebe9bdf58a2d6ca98573910bedf1e6020bc9a" exitCode=0 Jan 03 06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.835333 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerDied","Data":"8e1fb1f7a1282afb51acc14df15dfebcbd387a8ad192da5cbe5e2dec946e3413"} Jan 03 06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.835384 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerDied","Data":"b229eeb9b9ea228e815ad76287293926827cfd7ae30bb370526480b9e7e3a56b"} Jan 03 06:03:45 crc kubenswrapper[4854]: I0103 06:03:45.835394 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerDied","Data":"c98c7572e2fadf2eedfaaf84522ebe9bdf58a2d6ca98573910bedf1e6020bc9a"} Jan 03 06:03:46 crc kubenswrapper[4854]: E0103 06:03:46.846366 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 03 06:03:46 crc kubenswrapper[4854]: E0103 06:03:46.846581 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clkht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-v8pxd_openstack(9acf61c2-85c5-4ba2-9f4b-0778c961a268): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:03:46 crc kubenswrapper[4854]: E0103 06:03:46.847766 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-v8pxd" podUID="9acf61c2-85c5-4ba2-9f4b-0778c961a268" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.022269 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-l7b7k" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.099972 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-combined-ca-bundle\") pod \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.100346 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db4adf09-eb0a-4a6e-a49f-78e43cf04124-etc-swift\") pod \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.100373 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-scripts\") pod \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.100409 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-swiftconf\") pod \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.100431 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4ssn\" (UniqueName: \"kubernetes.io/projected/db4adf09-eb0a-4a6e-a49f-78e43cf04124-kube-api-access-l4ssn\") pod \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.100552 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-ring-data-devices\") pod \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.100572 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-dispersionconf\") pod \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\" (UID: \"db4adf09-eb0a-4a6e-a49f-78e43cf04124\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.103523 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "db4adf09-eb0a-4a6e-a49f-78e43cf04124" (UID: "db4adf09-eb0a-4a6e-a49f-78e43cf04124"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.109168 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4adf09-eb0a-4a6e-a49f-78e43cf04124-kube-api-access-l4ssn" (OuterVolumeSpecName: "kube-api-access-l4ssn") pod "db4adf09-eb0a-4a6e-a49f-78e43cf04124" (UID: "db4adf09-eb0a-4a6e-a49f-78e43cf04124"). InnerVolumeSpecName "kube-api-access-l4ssn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.110607 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4adf09-eb0a-4a6e-a49f-78e43cf04124-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "db4adf09-eb0a-4a6e-a49f-78e43cf04124" (UID: "db4adf09-eb0a-4a6e-a49f-78e43cf04124"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.118963 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "db4adf09-eb0a-4a6e-a49f-78e43cf04124" (UID: "db4adf09-eb0a-4a6e-a49f-78e43cf04124"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.138050 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "db4adf09-eb0a-4a6e-a49f-78e43cf04124" (UID: "db4adf09-eb0a-4a6e-a49f-78e43cf04124"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.152061 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db4adf09-eb0a-4a6e-a49f-78e43cf04124" (UID: "db4adf09-eb0a-4a6e-a49f-78e43cf04124"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.163848 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-scripts" (OuterVolumeSpecName: "scripts") pod "db4adf09-eb0a-4a6e-a49f-78e43cf04124" (UID: "db4adf09-eb0a-4a6e-a49f-78e43cf04124"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.208114 4854 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.208144 4854 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.208155 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.208163 4854 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db4adf09-eb0a-4a6e-a49f-78e43cf04124-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.208173 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db4adf09-eb0a-4a6e-a49f-78e43cf04124-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.208180 4854 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db4adf09-eb0a-4a6e-a49f-78e43cf04124-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.208189 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4ssn\" (UniqueName: \"kubernetes.io/projected/db4adf09-eb0a-4a6e-a49f-78e43cf04124-kube-api-access-l4ssn\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.262956 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxt9h" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.320972 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7pvk\" (UniqueName: \"kubernetes.io/projected/6f9e2844-dbc2-488b-bb08-77f9a4284a35-kube-api-access-x7pvk\") pod \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.321301 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-utilities\") pod \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.321437 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-catalog-content\") pod \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\" (UID: \"6f9e2844-dbc2-488b-bb08-77f9a4284a35\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.321903 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-utilities" (OuterVolumeSpecName: "utilities") pod "6f9e2844-dbc2-488b-bb08-77f9a4284a35" (UID: "6f9e2844-dbc2-488b-bb08-77f9a4284a35"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.322109 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.328514 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9e2844-dbc2-488b-bb08-77f9a4284a35-kube-api-access-x7pvk" (OuterVolumeSpecName: "kube-api-access-x7pvk") pod "6f9e2844-dbc2-488b-bb08-77f9a4284a35" (UID: "6f9e2844-dbc2-488b-bb08-77f9a4284a35"). InnerVolumeSpecName "kube-api-access-x7pvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.427776 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.430156 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7pvk\" (UniqueName: \"kubernetes.io/projected/6f9e2844-dbc2-488b-bb08-77f9a4284a35-kube-api-access-x7pvk\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.435327 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f9e2844-dbc2-488b-bb08-77f9a4284a35" (UID: "6f9e2844-dbc2-488b-bb08-77f9a4284a35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.531643 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.531707 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config-out\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.531945 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-0\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.531992 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-thanos-prometheus-http-client-file\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.532114 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-2\") pod 
\"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.532177 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng2qt\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-kube-api-access-ng2qt\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.532201 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.532224 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-1\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.532264 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-tls-assets\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.532298 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-web-config\") pod \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\" (UID: \"97a38e3c-dd5a-447b-b580-ed7bd5f16fde\") " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.540010 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9e2844-dbc2-488b-bb08-77f9a4284a35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.540647 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.541278 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.543624 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.548288 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.551273 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-kube-api-access-ng2qt" (OuterVolumeSpecName: "kube-api-access-ng2qt") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "kube-api-access-ng2qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.552495 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config" (OuterVolumeSpecName: "config") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.552617 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config-out" (OuterVolumeSpecName: "config-out") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.555564 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.586465 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "pvc-4d0c0986-6456-41c5-893f-749533411374". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.590722 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-web-config" (OuterVolumeSpecName: "web-config") pod "97a38e3c-dd5a-447b-b580-ed7bd5f16fde" (UID: "97a38e3c-dd5a-447b-b580-ed7bd5f16fde"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641849 4854 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641878 4854 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-web-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641911 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") on node \"crc\" " Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641922 4854 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config-out\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641931 4854 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641942 4854 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641951 4854 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641977 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng2qt\" (UniqueName: \"kubernetes.io/projected/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-kube-api-access-ng2qt\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641987 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.641995 4854 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/97a38e3c-dd5a-447b-b580-ed7bd5f16fde-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.696766 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.697109 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4d0c0986-6456-41c5-893f-749533411374" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374") on node "crc"
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.714131 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a66-account-create-update-thf68"]
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.743820 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") on node \"crc\" DevicePath \"\""
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.860555 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l7b7k"
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.860542 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l7b7k" event={"ID":"db4adf09-eb0a-4a6e-a49f-78e43cf04124","Type":"ContainerDied","Data":"63be384d9585ad73ddc603a544fbf895fb1e0400237aa983261a872d72a43141"}
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.860645 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63be384d9585ad73ddc603a544fbf895fb1e0400237aa983261a872d72a43141"
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.865826 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxt9h" event={"ID":"6f9e2844-dbc2-488b-bb08-77f9a4284a35","Type":"ContainerDied","Data":"c737a562b74fcbbb0e7e6a588d08d24e7fcd4ad66d86c5857d17c7b450ce65ca"}
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.865885 4854 scope.go:117] "RemoveContainer" containerID="182126e5ed38e163d25670173a949168214058743116da0e2ef3272d301f52f2"
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.866026 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxt9h"
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.873123 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a66-account-create-update-thf68" event={"ID":"e681201f-947c-41ba-93fe-0533bd1d071a","Type":"ContainerStarted","Data":"e260c2215667b15ae5a4225fd8318f5ce45b047ad232b6815c0b65e2f15ba8b9"}
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.878508 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.878508 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"97a38e3c-dd5a-447b-b580-ed7bd5f16fde","Type":"ContainerDied","Data":"961e13b21b5d51e645d050cb57f92e4a88a73fcf36afafffd6547add501ccefc"}
Jan 03 06:03:47 crc kubenswrapper[4854]: E0103 06:03:47.880037 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-v8pxd" podUID="9acf61c2-85c5-4ba2-9f4b-0778c961a268"
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.925875 4854 scope.go:117] "RemoveContainer" containerID="61da04aa23070db5bd8e652fd9d7dc4eb5ea08d20f929d20b3d913d951c1c41f"
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.983695 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxt9h"]
Jan 03 06:03:47 crc kubenswrapper[4854]: I0103 06:03:47.989338 4854 scope.go:117] "RemoveContainer" containerID="d794a2b8f646de8b7b9f6c014d8157516015c573f6d3e066ed2214adc99775cb"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.001373 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nxt9h"]
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.021284 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.027679 4854 scope.go:117] "RemoveContainer" containerID="8e1fb1f7a1282afb51acc14df15dfebcbd387a8ad192da5cbe5e2dec946e3413"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.058340 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.086251 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 03 06:03:48 crc kubenswrapper[4854]: E0103 06:03:48.086848 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="config-reloader"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.086891 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="config-reloader"
Jan 03 06:03:48 crc kubenswrapper[4854]: E0103 06:03:48.086914 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="extract-utilities"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.086921 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="extract-utilities"
Jan 03 06:03:48 crc kubenswrapper[4854]: E0103 06:03:48.086937 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4adf09-eb0a-4a6e-a49f-78e43cf04124" containerName="swift-ring-rebalance"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.086968 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4adf09-eb0a-4a6e-a49f-78e43cf04124" containerName="swift-ring-rebalance"
Jan 03 06:03:48 crc kubenswrapper[4854]: E0103 06:03:48.086978 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="thanos-sidecar"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.086983 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="thanos-sidecar"
Jan 03 06:03:48 crc kubenswrapper[4854]: E0103 06:03:48.086991 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="init-config-reloader"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.086996 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="init-config-reloader"
Jan 03 06:03:48 crc kubenswrapper[4854]: E0103 06:03:48.087014 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="prometheus"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.087021 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="prometheus"
Jan 03 06:03:48 crc kubenswrapper[4854]: E0103 06:03:48.087055 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="extract-content"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.087064 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="extract-content"
Jan 03 06:03:48 crc kubenswrapper[4854]: E0103 06:03:48.087113 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="registry-server"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.087121 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="registry-server"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.087443 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4adf09-eb0a-4a6e-a49f-78e43cf04124" containerName="swift-ring-rebalance"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.087465 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" containerName="registry-server"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.087477 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="prometheus"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.087487 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="config-reloader"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.087530 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" containerName="thanos-sidecar"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.089673 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.099147 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.099392 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hmglt"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.099593 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.099721 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.100616 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.106608 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.107056 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.107190 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.108515 4854 scope.go:117] "RemoveContainer" containerID="b229eeb9b9ea228e815ad76287293926827cfd7ae30bb370526480b9e7e3a56b"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.108699 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.122371 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.171515 4854 scope.go:117] "RemoveContainer" containerID="c98c7572e2fadf2eedfaaf84522ebe9bdf58a2d6ca98573910bedf1e6020bc9a"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.198598 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9e2844-dbc2-488b-bb08-77f9a4284a35" path="/var/lib/kubelet/pods/6f9e2844-dbc2-488b-bb08-77f9a4284a35/volumes"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.201639 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a38e3c-dd5a-447b-b580-ed7bd5f16fde" path="/var/lib/kubelet/pods/97a38e3c-dd5a-447b-b580-ed7bd5f16fde/volumes"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.216649 4854 scope.go:117] "RemoveContainer" containerID="d95b4d4b362297f6c2586c50864692d21d1c811876211a128deb95be83ec1ff2"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.278915 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzxv5\" (UniqueName: \"kubernetes.io/projected/79760a75-c798-415d-be02-dd3a6a9c74ee-kube-api-access-vzxv5\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.278992 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.279045 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.279169 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.279454 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.281393 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.281529 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-config\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.281561 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/79760a75-c798-415d-be02-dd3a6a9c74ee-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.281591 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.281885 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/79760a75-c798-415d-be02-dd3a6a9c74ee-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.284611 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.284780 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.284832 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.389586 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/79760a75-c798-415d-be02-dd3a6a9c74ee-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.389638 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.389797 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.389814 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.389841 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzxv5\" (UniqueName: \"kubernetes.io/projected/79760a75-c798-415d-be02-dd3a6a9c74ee-kube-api-access-vzxv5\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.390880 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.396170 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7hvb6"]
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.396838 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.396956 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.397012 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.397046 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.397090 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.397141 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.397308 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.397342 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-config\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.399712 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/79760a75-c798-415d-be02-dd3a6a9c74ee-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.399758 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/79760a75-c798-415d-be02-dd3a6a9c74ee-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.399782 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.402031 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0"
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.406393 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.406451 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c042c513fa3aca66ad55ee0b68f2245eaa190a63e0e2078526e0ed40cb362657/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.408032 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/79760a75-c798-415d-be02-dd3a6a9c74ee-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.410004 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.410288 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/79760a75-c798-415d-be02-dd3a6a9c74ee-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.429187 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6fa3-account-create-update-pb5n6"] Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.443185 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6sqkj"] Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.447815 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.447861 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-config\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.447931 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/79760a75-c798-415d-be02-dd3a6a9c74ee-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.454846 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vzxv5\" (UniqueName: \"kubernetes.io/projected/79760a75-c798-415d-be02-dd3a6a9c74ee-kube-api-access-vzxv5\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.455010 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-ab39-account-create-update-24pb8"] Jan 03 06:03:48 crc kubenswrapper[4854]: W0103 06:03:48.466024 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc310d0_38e5_41d6_a784_d8e534a5e324.slice/crio-51234c1de7e5c47713f8ddea57d127e1fc6f454f8105fb591f86dcd46cf3cd26 WatchSource:0}: Error finding container 51234c1de7e5c47713f8ddea57d127e1fc6f454f8105fb591f86dcd46cf3cd26: Status 404 returned error can't find the container with id 51234c1de7e5c47713f8ddea57d127e1fc6f454f8105fb591f86dcd46cf3cd26 Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.467436 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ddlqd"] Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.476499 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k7lm6"] Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.486602 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4rfsf"] Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.498112 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.505463 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d0c0986-6456-41c5-893f-749533411374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0c0986-6456-41c5-893f-749533411374\") pod \"prometheus-metric-storage-0\" (UID: \"79760a75-c798-415d-be02-dd3a6a9c74ee\") " pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.507043 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8c51-account-create-update-9m5qc"] Jan 03 06:03:48 crc kubenswrapper[4854]: W0103 06:03:48.523287 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba2c8def_0d1c_4a79_a63d_c6423a1b4823.slice/crio-9a3b0ea70fb0258e169b22c249e5714adbaaa61c150dfe0aa29ceb086d984afe WatchSource:0}: Error finding container 9a3b0ea70fb0258e169b22c249e5714adbaaa61c150dfe0aa29ceb086d984afe: Status 404 returned error can't find the container with id 9a3b0ea70fb0258e169b22c249e5714adbaaa61c150dfe0aa29ceb086d984afe Jan 03 06:03:48 crc kubenswrapper[4854]: W0103 06:03:48.534632 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7456fb80_40dc_4ef7_86ee_062ad4b064d2.slice/crio-3d4c5638c1b414385c3d04a09a08934818bee488fcfc33ac4c37571a6167aff8 WatchSource:0}: Error finding container 3d4c5638c1b414385c3d04a09a08934818bee488fcfc33ac4c37571a6167aff8: Status 404 returned error can't find the container with id 3d4c5638c1b414385c3d04a09a08934818bee488fcfc33ac4c37571a6167aff8 Jan 03 06:03:48 crc kubenswrapper[4854]: W0103 06:03:48.535678 4854 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7134f57e_784c_4c40_b9d3_cf1e86a1237e.slice/crio-903e299b93111fc5a3b80b99be653e367ae5fcaf84913e992223d38c0e727a22 WatchSource:0}: Error finding container 903e299b93111fc5a3b80b99be653e367ae5fcaf84913e992223d38c0e727a22: Status 404 returned error can't find the container with id 903e299b93111fc5a3b80b99be653e367ae5fcaf84913e992223d38c0e727a22 Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.744604 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.902059 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k7lm6" event={"ID":"ba2c8def-0d1c-4a79-a63d-c6423a1b4823","Type":"ContainerStarted","Data":"9a3b0ea70fb0258e169b22c249e5714adbaaa61c150dfe0aa29ceb086d984afe"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.910174 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6sqkj" event={"ID":"7456fb80-40dc-4ef7-86ee-062ad4b064d2","Type":"ContainerStarted","Data":"3d4c5638c1b414385c3d04a09a08934818bee488fcfc33ac4c37571a6167aff8"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.912724 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"34ba0145-7948-47f0-bec5-7f5fc6cb1150","Type":"ContainerStarted","Data":"fe3cf8768f12da88fd1a54bdc23aaee6ba78a7d2bda073cf5c769354949edd57"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.913815 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ddlqd" event={"ID":"7134f57e-784c-4c40-b9d3-cf1e86a1237e","Type":"ContainerStarted","Data":"903e299b93111fc5a3b80b99be653e367ae5fcaf84913e992223d38c0e727a22"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.917051 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8c51-account-create-update-9m5qc" event={"ID":"0019311a-ce5a-4dbb-bef8-8cac6b78a304","Type":"ContainerStarted","Data":"d7b1ff233526c78dbb69ba765c7662aa201e992b6326a633a465b6df5c9e3246"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.921010 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4rfsf" event={"ID":"fdc310d0-38e5-41d6-a784-d8e534a5e324","Type":"ContainerStarted","Data":"51234c1de7e5c47713f8ddea57d127e1fc6f454f8105fb591f86dcd46cf3cd26"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.922842 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ab39-account-create-update-24pb8" event={"ID":"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4","Type":"ContainerStarted","Data":"5cd0b48c756e8e924a820026f3dd7e038f0416a66665ae69d5ad3b1ab4c2eff0"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.932037 4854 generic.go:334] "Generic (PLEG): container finished" podID="e681201f-947c-41ba-93fe-0533bd1d071a" containerID="f10e5e2935b25c279f3520595d2c1e8c63da466262a6d47e5cd62567480f6a37" exitCode=0 Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.932113 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a66-account-create-update-thf68" event={"ID":"e681201f-947c-41ba-93fe-0533bd1d071a","Type":"ContainerDied","Data":"f10e5e2935b25c279f3520595d2c1e8c63da466262a6d47e5cd62567480f6a37"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.933587 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7hvb6" 
event={"ID":"8d67c032-022a-4d33-95e6-cdf31147fb4c","Type":"ContainerStarted","Data":"1cd08d13cd048e2d31b71482d9316b498bbe2e1c0059683dd5482d28b5edacea"} Jan 03 06:03:48 crc kubenswrapper[4854]: I0103 06:03:48.943422 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6fa3-account-create-update-pb5n6" event={"ID":"c448f9c4-fd70-4c6d-853e-c4197af5b80b","Type":"ContainerStarted","Data":"68db69f47e1b854c35adbe74b4643393aab7596c0eaf13c5d2f1739eebe59922"} Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.365905 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.395149 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.955163 4854 generic.go:334] "Generic (PLEG): container finished" podID="c448f9c4-fd70-4c6d-853e-c4197af5b80b" containerID="d8ab9dc75dd9131d4da0d649ace9b9643f50ef6d2ec1f4ff874297d58979af95" exitCode=0 Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.955229 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6fa3-account-create-update-pb5n6" event={"ID":"c448f9c4-fd70-4c6d-853e-c4197af5b80b","Type":"ContainerDied","Data":"d8ab9dc75dd9131d4da0d649ace9b9643f50ef6d2ec1f4ff874297d58979af95"} Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.958620 4854 generic.go:334] "Generic (PLEG): container finished" podID="7134f57e-784c-4c40-b9d3-cf1e86a1237e" containerID="86aeb111cd41b99b4e25c4a90df9a0c5af23d8a02afdd671ecfdff248a495fec" exitCode=0 Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.958681 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ddlqd" event={"ID":"7134f57e-784c-4c40-b9d3-cf1e86a1237e","Type":"ContainerDied","Data":"86aeb111cd41b99b4e25c4a90df9a0c5af23d8a02afdd671ecfdff248a495fec"} Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.960873 4854 generic.go:334] "Generic (PLEG): container finished" podID="7456fb80-40dc-4ef7-86ee-062ad4b064d2" containerID="d605944eac94e87a457aafb1289ad4229f88a9b5361db72fa726fc00a240d35a" exitCode=0 Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.961004 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6sqkj" event={"ID":"7456fb80-40dc-4ef7-86ee-062ad4b064d2","Type":"ContainerDied","Data":"d605944eac94e87a457aafb1289ad4229f88a9b5361db72fa726fc00a240d35a"} Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.963570 4854 generic.go:334] "Generic (PLEG): container finished" podID="0019311a-ce5a-4dbb-bef8-8cac6b78a304" containerID="ab50404c6d7ec773d7b40476abf6b60c9a0771004e8ec4d93f1937e47c8b1a68" exitCode=0 Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.963789 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8c51-account-create-update-9m5qc" event={"ID":"0019311a-ce5a-4dbb-bef8-8cac6b78a304","Type":"ContainerDied","Data":"ab50404c6d7ec773d7b40476abf6b60c9a0771004e8ec4d93f1937e47c8b1a68"} Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.965835 4854 generic.go:334] "Generic (PLEG): container finished" podID="fdc310d0-38e5-41d6-a784-d8e534a5e324" containerID="b9363bbc3e9e0398e365e3eb65cad2c07b8aa8c85d49d2c35cbbb209f78823e5" exitCode=0 Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.965974 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4rfsf" 
event={"ID":"fdc310d0-38e5-41d6-a784-d8e534a5e324","Type":"ContainerDied","Data":"b9363bbc3e9e0398e365e3eb65cad2c07b8aa8c85d49d2c35cbbb209f78823e5"} Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.967903 4854 generic.go:334] "Generic (PLEG): container finished" podID="ba2c8def-0d1c-4a79-a63d-c6423a1b4823" containerID="53658e927cd0cd5f5b0a1356e3571ca5e90e2c58ed658a69bbcd3643d85e6ffd" exitCode=0 Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.967944 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k7lm6" event={"ID":"ba2c8def-0d1c-4a79-a63d-c6423a1b4823","Type":"ContainerDied","Data":"53658e927cd0cd5f5b0a1356e3571ca5e90e2c58ed658a69bbcd3643d85e6ffd"} Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.974354 4854 generic.go:334] "Generic (PLEG): container finished" podID="96b6abe3-ad62-48fc-bd6d-8df5e103c5d4" containerID="f63a5954aec391713b07f14e9ae550f7a5ef3bf7d214bb0ce824b671a4499301" exitCode=0 Jan 03 06:03:49 crc kubenswrapper[4854]: I0103 06:03:49.974572 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ab39-account-create-update-24pb8" event={"ID":"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4","Type":"ContainerDied","Data":"f63a5954aec391713b07f14e9ae550f7a5ef3bf7d214bb0ce824b671a4499301"} Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.528092 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.563707 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xtpx\" (UniqueName: \"kubernetes.io/projected/e681201f-947c-41ba-93fe-0533bd1d071a-kube-api-access-2xtpx\") pod \"e681201f-947c-41ba-93fe-0533bd1d071a\" (UID: \"e681201f-947c-41ba-93fe-0533bd1d071a\") " Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.563772 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e681201f-947c-41ba-93fe-0533bd1d071a-operator-scripts\") pod \"e681201f-947c-41ba-93fe-0533bd1d071a\" (UID: \"e681201f-947c-41ba-93fe-0533bd1d071a\") " Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.564832 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e681201f-947c-41ba-93fe-0533bd1d071a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e681201f-947c-41ba-93fe-0533bd1d071a" (UID: "e681201f-947c-41ba-93fe-0533bd1d071a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.596650 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e681201f-947c-41ba-93fe-0533bd1d071a-kube-api-access-2xtpx" (OuterVolumeSpecName: "kube-api-access-2xtpx") pod "e681201f-947c-41ba-93fe-0533bd1d071a" (UID: "e681201f-947c-41ba-93fe-0533bd1d071a"). InnerVolumeSpecName "kube-api-access-2xtpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.667414 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xtpx\" (UniqueName: \"kubernetes.io/projected/e681201f-947c-41ba-93fe-0533bd1d071a-kube-api-access-2xtpx\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.667459 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e681201f-947c-41ba-93fe-0533bd1d071a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.989166 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"34ba0145-7948-47f0-bec5-7f5fc6cb1150","Type":"ContainerStarted","Data":"02d19c211e557252722ce483b873a9bb932af341ec47b481c980ccc8a449aaeb"} Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.992001 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"79760a75-c798-415d-be02-dd3a6a9c74ee","Type":"ContainerStarted","Data":"584080d2bebd3367f9923ec3166926b4681deb7bac0012082fbcf075f3f7e554"} Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.994092 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a66-account-create-update-thf68" event={"ID":"e681201f-947c-41ba-93fe-0533bd1d071a","Type":"ContainerDied","Data":"e260c2215667b15ae5a4225fd8318f5ce45b047ad232b6815c0b65e2f15ba8b9"} Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.994159 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e260c2215667b15ae5a4225fd8318f5ce45b047ad232b6815c0b65e2f15ba8b9" Jan 03 06:03:50 crc kubenswrapper[4854]: I0103 06:03:50.994263 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a66-account-create-update-thf68" Jan 03 06:03:51 crc kubenswrapper[4854]: I0103 06:03:51.017580 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=8.945423699 podStartE2EDuration="11.017562203s" podCreationTimestamp="2026-01-03 06:03:40 +0000 UTC" firstStartedPulling="2026-01-03 06:03:48.457984403 +0000 UTC m=+1406.784560975" lastFinishedPulling="2026-01-03 06:03:50.530122907 +0000 UTC m=+1408.856699479" observedRunningTime="2026-01-03 06:03:51.009276609 +0000 UTC m=+1409.335853201" watchObservedRunningTime="2026-01-03 06:03:51.017562203 +0000 UTC m=+1409.344138775" Jan 03 06:03:53 crc kubenswrapper[4854]: I0103 06:03:53.975594 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:53 crc kubenswrapper[4854]: I0103 06:03:53.982741 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:53 crc kubenswrapper[4854]: I0103 06:03:53.995345 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.032665 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.035708 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-ab39-account-create-update-24pb8" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.035705 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ab39-account-create-update-24pb8" event={"ID":"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4","Type":"ContainerDied","Data":"5cd0b48c756e8e924a820026f3dd7e038f0416a66665ae69d5ad3b1ab4c2eff0"} Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.035889 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cd0b48c756e8e924a820026f3dd7e038f0416a66665ae69d5ad3b1ab4c2eff0" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.037158 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.044607 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"79760a75-c798-415d-be02-dd3a6a9c74ee","Type":"ContainerStarted","Data":"dfa07bc1646e00894267d072cfbd8cafc77ee3509d5e1b1db5a4a341d00721f4"} Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.051729 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6fa3-account-create-update-pb5n6" event={"ID":"c448f9c4-fd70-4c6d-853e-c4197af5b80b","Type":"ContainerDied","Data":"68db69f47e1b854c35adbe74b4643393aab7596c0eaf13c5d2f1739eebe59922"} Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.051775 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68db69f47e1b854c35adbe74b4643393aab7596c0eaf13c5d2f1739eebe59922" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.062816 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.058045 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.065952 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ddlqd" event={"ID":"7134f57e-784c-4c40-b9d3-cf1e86a1237e","Type":"ContainerDied","Data":"903e299b93111fc5a3b80b99be653e367ae5fcaf84913e992223d38c0e727a22"} Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.065987 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="903e299b93111fc5a3b80b99be653e367ae5fcaf84913e992223d38c0e727a22" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.066035 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ddlqd" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.071436 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6sqkj" event={"ID":"7456fb80-40dc-4ef7-86ee-062ad4b064d2","Type":"ContainerDied","Data":"3d4c5638c1b414385c3d04a09a08934818bee488fcfc33ac4c37571a6167aff8"} Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.071468 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d4c5638c1b414385c3d04a09a08934818bee488fcfc33ac4c37571a6167aff8" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.071471 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6sqkj" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.076939 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8c51-account-create-update-9m5qc" event={"ID":"0019311a-ce5a-4dbb-bef8-8cac6b78a304","Type":"ContainerDied","Data":"d7b1ff233526c78dbb69ba765c7662aa201e992b6326a633a465b6df5c9e3246"} Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.076979 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b1ff233526c78dbb69ba765c7662aa201e992b6326a633a465b6df5c9e3246" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.077045 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8c51-account-create-update-9m5qc" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.078536 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4rfsf" event={"ID":"fdc310d0-38e5-41d6-a784-d8e534a5e324","Type":"ContainerDied","Data":"51234c1de7e5c47713f8ddea57d127e1fc6f454f8105fb591f86dcd46cf3cd26"} Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.078562 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51234c1de7e5c47713f8ddea57d127e1fc6f454f8105fb591f86dcd46cf3cd26" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.078598 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4rfsf" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.080027 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k7lm6" event={"ID":"ba2c8def-0d1c-4a79-a63d-c6423a1b4823","Type":"ContainerDied","Data":"9a3b0ea70fb0258e169b22c249e5714adbaaa61c150dfe0aa29ceb086d984afe"} Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.080068 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a3b0ea70fb0258e169b22c249e5714adbaaa61c150dfe0aa29ceb086d984afe" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.080130 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-k7lm6" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.150975 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8zvg\" (UniqueName: \"kubernetes.io/projected/7134f57e-784c-4c40-b9d3-cf1e86a1237e-kube-api-access-z8zvg\") pod \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\" (UID: \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151072 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dld5z\" (UniqueName: \"kubernetes.io/projected/7456fb80-40dc-4ef7-86ee-062ad4b064d2-kube-api-access-dld5z\") pod \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\" (UID: \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151136 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc310d0-38e5-41d6-a784-d8e534a5e324-operator-scripts\") pod \"fdc310d0-38e5-41d6-a784-d8e534a5e324\" (UID: \"fdc310d0-38e5-41d6-a784-d8e534a5e324\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151244 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7134f57e-784c-4c40-b9d3-cf1e86a1237e-operator-scripts\") pod \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\" (UID: \"7134f57e-784c-4c40-b9d3-cf1e86a1237e\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151275 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj2kb\" (UniqueName: \"kubernetes.io/projected/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-kube-api-access-kj2kb\") pod \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\" (UID: \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151315 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5d25\" (UniqueName: \"kubernetes.io/projected/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-kube-api-access-p5d25\") pod \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\" (UID: \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151346 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-operator-scripts\") pod \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\" (UID: \"ba2c8def-0d1c-4a79-a63d-c6423a1b4823\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151399 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knv2s\" (UniqueName: \"kubernetes.io/projected/fdc310d0-38e5-41d6-a784-d8e534a5e324-kube-api-access-knv2s\") pod \"fdc310d0-38e5-41d6-a784-d8e534a5e324\" (UID: \"fdc310d0-38e5-41d6-a784-d8e534a5e324\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151503 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7456fb80-40dc-4ef7-86ee-062ad4b064d2-operator-scripts\") pod \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\" (UID: \"7456fb80-40dc-4ef7-86ee-062ad4b064d2\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.151566 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-operator-scripts\") pod \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\" (UID: \"96b6abe3-ad62-48fc-bd6d-8df5e103c5d4\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.152945 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7456fb80-40dc-4ef7-86ee-062ad4b064d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7456fb80-40dc-4ef7-86ee-062ad4b064d2" (UID: "7456fb80-40dc-4ef7-86ee-062ad4b064d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.153410 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96b6abe3-ad62-48fc-bd6d-8df5e103c5d4" (UID: "96b6abe3-ad62-48fc-bd6d-8df5e103c5d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.153653 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba2c8def-0d1c-4a79-a63d-c6423a1b4823" (UID: "ba2c8def-0d1c-4a79-a63d-c6423a1b4823"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.153912 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7134f57e-784c-4c40-b9d3-cf1e86a1237e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7134f57e-784c-4c40-b9d3-cf1e86a1237e" (UID: "7134f57e-784c-4c40-b9d3-cf1e86a1237e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.154223 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc310d0-38e5-41d6-a784-d8e534a5e324-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdc310d0-38e5-41d6-a784-d8e534a5e324" (UID: "fdc310d0-38e5-41d6-a784-d8e534a5e324"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.155740 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7134f57e-784c-4c40-b9d3-cf1e86a1237e-kube-api-access-z8zvg" (OuterVolumeSpecName: "kube-api-access-z8zvg") pod "7134f57e-784c-4c40-b9d3-cf1e86a1237e" (UID: "7134f57e-784c-4c40-b9d3-cf1e86a1237e"). InnerVolumeSpecName "kube-api-access-z8zvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.157117 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7456fb80-40dc-4ef7-86ee-062ad4b064d2-kube-api-access-dld5z" (OuterVolumeSpecName: "kube-api-access-dld5z") pod "7456fb80-40dc-4ef7-86ee-062ad4b064d2" (UID: "7456fb80-40dc-4ef7-86ee-062ad4b064d2"). InnerVolumeSpecName "kube-api-access-dld5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.157391 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc310d0-38e5-41d6-a784-d8e534a5e324-kube-api-access-knv2s" (OuterVolumeSpecName: "kube-api-access-knv2s") pod "fdc310d0-38e5-41d6-a784-d8e534a5e324" (UID: "fdc310d0-38e5-41d6-a784-d8e534a5e324"). InnerVolumeSpecName "kube-api-access-knv2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.158307 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-kube-api-access-p5d25" (OuterVolumeSpecName: "kube-api-access-p5d25") pod "ba2c8def-0d1c-4a79-a63d-c6423a1b4823" (UID: "ba2c8def-0d1c-4a79-a63d-c6423a1b4823"). InnerVolumeSpecName "kube-api-access-p5d25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.159812 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-kube-api-access-kj2kb" (OuterVolumeSpecName: "kube-api-access-kj2kb") pod "96b6abe3-ad62-48fc-bd6d-8df5e103c5d4" (UID: "96b6abe3-ad62-48fc-bd6d-8df5e103c5d4"). InnerVolumeSpecName "kube-api-access-kj2kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.252932 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c448f9c4-fd70-4c6d-853e-c4197af5b80b-operator-scripts\") pod \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\" (UID: \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.252971 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhn76\" (UniqueName: \"kubernetes.io/projected/c448f9c4-fd70-4c6d-853e-c4197af5b80b-kube-api-access-qhn76\") pod \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\" (UID: \"c448f9c4-fd70-4c6d-853e-c4197af5b80b\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253068 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlffr\" (UniqueName: \"kubernetes.io/projected/0019311a-ce5a-4dbb-bef8-8cac6b78a304-kube-api-access-tlffr\") pod \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\" (UID: \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253138 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0019311a-ce5a-4dbb-bef8-8cac6b78a304-operator-scripts\") pod \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\" (UID: \"0019311a-ce5a-4dbb-bef8-8cac6b78a304\") " Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253316 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c448f9c4-fd70-4c6d-853e-c4197af5b80b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c448f9c4-fd70-4c6d-853e-c4197af5b80b" (UID: "c448f9c4-fd70-4c6d-853e-c4197af5b80b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253803 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dld5z\" (UniqueName: \"kubernetes.io/projected/7456fb80-40dc-4ef7-86ee-062ad4b064d2-kube-api-access-dld5z\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253831 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdc310d0-38e5-41d6-a784-d8e534a5e324-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253857 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c448f9c4-fd70-4c6d-853e-c4197af5b80b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253872 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7134f57e-784c-4c40-b9d3-cf1e86a1237e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253887 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj2kb\" (UniqueName: \"kubernetes.io/projected/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-kube-api-access-kj2kb\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253900 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5d25\" (UniqueName: \"kubernetes.io/projected/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-kube-api-access-p5d25\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253914 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c8def-0d1c-4a79-a63d-c6423a1b4823-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253925 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knv2s\" (UniqueName: \"kubernetes.io/projected/fdc310d0-38e5-41d6-a784-d8e534a5e324-kube-api-access-knv2s\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253938 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7456fb80-40dc-4ef7-86ee-062ad4b064d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253950 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.253964 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8zvg\" (UniqueName: \"kubernetes.io/projected/7134f57e-784c-4c40-b9d3-cf1e86a1237e-kube-api-access-z8zvg\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.254365 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0019311a-ce5a-4dbb-bef8-8cac6b78a304-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0019311a-ce5a-4dbb-bef8-8cac6b78a304" (UID: "0019311a-ce5a-4dbb-bef8-8cac6b78a304"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.256162 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c448f9c4-fd70-4c6d-853e-c4197af5b80b-kube-api-access-qhn76" (OuterVolumeSpecName: "kube-api-access-qhn76") pod "c448f9c4-fd70-4c6d-853e-c4197af5b80b" (UID: "c448f9c4-fd70-4c6d-853e-c4197af5b80b"). InnerVolumeSpecName "kube-api-access-qhn76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.257514 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0019311a-ce5a-4dbb-bef8-8cac6b78a304-kube-api-access-tlffr" (OuterVolumeSpecName: "kube-api-access-tlffr") pod "0019311a-ce5a-4dbb-bef8-8cac6b78a304" (UID: "0019311a-ce5a-4dbb-bef8-8cac6b78a304"). InnerVolumeSpecName "kube-api-access-tlffr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:54 crc kubenswrapper[4854]: E0103 06:03:54.316427 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7134f57e_784c_4c40_b9d3_cf1e86a1237e.slice\": RecentStats: unable to find data in memory cache]" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.356551 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlffr\" (UniqueName: \"kubernetes.io/projected/0019311a-ce5a-4dbb-bef8-8cac6b78a304-kube-api-access-tlffr\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.356588 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0019311a-ce5a-4dbb-bef8-8cac6b78a304-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:54 crc kubenswrapper[4854]: I0103 06:03:54.356601 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhn76\" (UniqueName: \"kubernetes.io/projected/c448f9c4-fd70-4c6d-853e-c4197af5b80b-kube-api-access-qhn76\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:55 crc kubenswrapper[4854]: I0103 06:03:55.119716 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7hvb6" event={"ID":"8d67c032-022a-4d33-95e6-cdf31147fb4c","Type":"ContainerStarted","Data":"336595bce2732d146ca99d23eabf746c246400618dd53c65b617389cb270e350"} Jan 03 06:03:55 crc kubenswrapper[4854]: I0103 06:03:55.120039 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6fa3-account-create-update-pb5n6" Jan 03 06:03:55 crc kubenswrapper[4854]: I0103 06:03:55.186821 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-7hvb6" podStartSLOduration=8.799631203 podStartE2EDuration="14.186800575s" podCreationTimestamp="2026-01-03 06:03:41 +0000 UTC" firstStartedPulling="2026-01-03 06:03:48.457347307 +0000 UTC m=+1406.783923879" lastFinishedPulling="2026-01-03 06:03:53.844516679 +0000 UTC m=+1412.171093251" observedRunningTime="2026-01-03 06:03:55.145156548 +0000 UTC m=+1413.471733140" watchObservedRunningTime="2026-01-03 06:03:55.186800575 +0000 UTC m=+1413.513377158" Jan 03 06:03:57 crc kubenswrapper[4854]: I0103 06:03:57.149137 4854 generic.go:334] "Generic (PLEG): container finished" podID="8d67c032-022a-4d33-95e6-cdf31147fb4c" containerID="336595bce2732d146ca99d23eabf746c246400618dd53c65b617389cb270e350" exitCode=0 Jan 03 06:03:57 crc kubenswrapper[4854]: I0103 06:03:57.149469 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7hvb6" event={"ID":"8d67c032-022a-4d33-95e6-cdf31147fb4c","Type":"ContainerDied","Data":"336595bce2732d146ca99d23eabf746c246400618dd53c65b617389cb270e350"} Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.528432 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.685382 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-combined-ca-bundle\") pod \"8d67c032-022a-4d33-95e6-cdf31147fb4c\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.685669 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-config-data\") pod \"8d67c032-022a-4d33-95e6-cdf31147fb4c\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.686479 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq6rv\" (UniqueName: \"kubernetes.io/projected/8d67c032-022a-4d33-95e6-cdf31147fb4c-kube-api-access-fq6rv\") pod \"8d67c032-022a-4d33-95e6-cdf31147fb4c\" (UID: \"8d67c032-022a-4d33-95e6-cdf31147fb4c\") " Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.693412 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d67c032-022a-4d33-95e6-cdf31147fb4c-kube-api-access-fq6rv" (OuterVolumeSpecName: "kube-api-access-fq6rv") pod "8d67c032-022a-4d33-95e6-cdf31147fb4c" (UID: "8d67c032-022a-4d33-95e6-cdf31147fb4c"). InnerVolumeSpecName "kube-api-access-fq6rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.715196 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d67c032-022a-4d33-95e6-cdf31147fb4c" (UID: "8d67c032-022a-4d33-95e6-cdf31147fb4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.753510 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-config-data" (OuterVolumeSpecName: "config-data") pod "8d67c032-022a-4d33-95e6-cdf31147fb4c" (UID: "8d67c032-022a-4d33-95e6-cdf31147fb4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.789618 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.789652 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq6rv\" (UniqueName: \"kubernetes.io/projected/8d67c032-022a-4d33-95e6-cdf31147fb4c-kube-api-access-fq6rv\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:58 crc kubenswrapper[4854]: I0103 06:03:58.789661 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d67c032-022a-4d33-95e6-cdf31147fb4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.168115 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7hvb6" event={"ID":"8d67c032-022a-4d33-95e6-cdf31147fb4c","Type":"ContainerDied","Data":"1cd08d13cd048e2d31b71482d9316b498bbe2e1c0059683dd5482d28b5edacea"} Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.168449 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd08d13cd048e2d31b71482d9316b498bbe2e1c0059683dd5482d28b5edacea" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.168178 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7hvb6" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.475829 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-vkkvk"] Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476273 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0019311a-ce5a-4dbb-bef8-8cac6b78a304" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476288 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0019311a-ce5a-4dbb-bef8-8cac6b78a304" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476303 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e681201f-947c-41ba-93fe-0533bd1d071a" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476309 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e681201f-947c-41ba-93fe-0533bd1d071a" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476325 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7456fb80-40dc-4ef7-86ee-062ad4b064d2" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476331 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7456fb80-40dc-4ef7-86ee-062ad4b064d2" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476348 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c448f9c4-fd70-4c6d-853e-c4197af5b80b" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476354 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c448f9c4-fd70-4c6d-853e-c4197af5b80b" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476365 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b6abe3-ad62-48fc-bd6d-8df5e103c5d4" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476371 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b6abe3-ad62-48fc-bd6d-8df5e103c5d4" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476383 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d67c032-022a-4d33-95e6-cdf31147fb4c" containerName="keystone-db-sync" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476389 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d67c032-022a-4d33-95e6-cdf31147fb4c" containerName="keystone-db-sync" Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476401 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7134f57e-784c-4c40-b9d3-cf1e86a1237e" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476407 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7134f57e-784c-4c40-b9d3-cf1e86a1237e" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476417 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2c8def-0d1c-4a79-a63d-c6423a1b4823" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476423 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2c8def-0d1c-4a79-a63d-c6423a1b4823" 
containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: E0103 06:03:59.476429 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc310d0-38e5-41d6-a784-d8e534a5e324" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476435 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc310d0-38e5-41d6-a784-d8e534a5e324" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476619 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7456fb80-40dc-4ef7-86ee-062ad4b064d2" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476632 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e681201f-947c-41ba-93fe-0533bd1d071a" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476643 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc310d0-38e5-41d6-a784-d8e534a5e324" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476653 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d67c032-022a-4d33-95e6-cdf31147fb4c" containerName="keystone-db-sync" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476660 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c448f9c4-fd70-4c6d-853e-c4197af5b80b" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476669 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="0019311a-ce5a-4dbb-bef8-8cac6b78a304" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476677 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2c8def-0d1c-4a79-a63d-c6423a1b4823" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476685 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7134f57e-784c-4c40-b9d3-cf1e86a1237e" containerName="mariadb-database-create" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.476698 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b6abe3-ad62-48fc-bd6d-8df5e103c5d4" containerName="mariadb-account-create-update" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.477768 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.505231 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pscs7"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.506591 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.511789 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.512042 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.512216 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c82z5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.512349 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.512484 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.533157 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pscs7"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.555294 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-vkkvk"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.594140 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-9rnh5"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.595573 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.597821 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607633 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-dns-svc\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607727 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607757 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-config-data\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607781 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-combined-ca-bundle\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607798 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv9w8\" (UniqueName: 
\"kubernetes.io/projected/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-kube-api-access-rv9w8\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607814 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-fernet-keys\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607901 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6bf6\" (UniqueName: \"kubernetes.io/projected/70382168-87d2-405b-9ee0-8a3969573750-kube-api-access-d6bf6\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607947 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-credential-keys\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.607961 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-config\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.608006 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-scripts\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.608042 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.611053 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-pvtl7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.647367 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9rnh5"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.684809 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-sd52b"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.686386 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.694511 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.694773 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.694946 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-46g77" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.698847 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sd52b"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713248 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-scripts\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713296 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713323 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-dns-svc\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713355 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-combined-ca-bundle\") pod \"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713389 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrnjg\" (UniqueName: \"kubernetes.io/projected/8f46296d-5d5c-4aa8-94e1-e8e5951da088-kube-api-access-vrnjg\") pod \"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713460 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713501 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-config-data\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713528 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-combined-ca-bundle\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713568 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9w8\" (UniqueName: \"kubernetes.io/projected/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-kube-api-access-rv9w8\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713588 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-fernet-keys\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713617 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713704 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6bf6\" (UniqueName: \"kubernetes.io/projected/70382168-87d2-405b-9ee0-8a3969573750-kube-api-access-d6bf6\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713728 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-config-data\") pod \"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713777 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-credential-keys\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.713792 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-config\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.714241 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.715378 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-config\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: 
\"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.715798 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-dns-svc\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.720281 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.729422 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-scripts\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.729924 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-combined-ca-bundle\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.731442 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6a47ad8-d256-453c-910a-1506c8f73657-etc-swift\") pod \"swift-storage-0\" (UID: \"f6a47ad8-d256-453c-910a-1506c8f73657\") " pod="openstack/swift-storage-0" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.731703 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-credential-keys\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.732332 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-fernet-keys\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.735970 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lk7dp"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.736747 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-config-data\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.737375 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.748649 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.748864 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zhvd5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.750804 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6bf6\" (UniqueName: \"kubernetes.io/projected/70382168-87d2-405b-9ee0-8a3969573750-kube-api-access-d6bf6\") pod \"keystone-bootstrap-pscs7\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.751386 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9w8\" (UniqueName: \"kubernetes.io/projected/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-kube-api-access-rv9w8\") pod \"dnsmasq-dns-f877ddd87-vkkvk\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.755210 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lk7dp"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.802621 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-xqtnh"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.804067 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.805234 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.811763 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.812098 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.812467 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vqhx8" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815484 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-combined-ca-bundle\") pod \"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815527 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrnjg\" (UniqueName: \"kubernetes.io/projected/8f46296d-5d5c-4aa8-94e1-e8e5951da088-kube-api-access-vrnjg\") pod \"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815578 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9wm\" (UniqueName: \"kubernetes.io/projected/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-kube-api-access-pk9wm\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815607 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-config-data\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815657 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-scripts\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815679 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-db-sync-config-data\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815720 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7nb\" (UniqueName: \"kubernetes.io/projected/ca061deb-f600-49db-8ac3-6213e22b2f76-kube-api-access-bc7nb\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815741 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-config-data\") pod 
\"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815774 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-db-sync-config-data\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815794 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-combined-ca-bundle\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815835 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca061deb-f600-49db-8ac3-6213e22b2f76-etc-machine-id\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.815853 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-combined-ca-bundle\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.835898 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xqtnh"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.836690 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.836908 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-combined-ca-bundle\") pod \"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.845254 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-config-data\") pod \"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.858352 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrnjg\" (UniqueName: \"kubernetes.io/projected/8f46296d-5d5c-4aa8-94e1-e8e5951da088-kube-api-access-vrnjg\") pod \"heat-db-sync-9rnh5\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.876375 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.925404 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-9rnh5" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.926622 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-db-sync-config-data\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.926659 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-combined-ca-bundle\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.926886 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-combined-ca-bundle\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.926929 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca061deb-f600-49db-8ac3-6213e22b2f76-etc-machine-id\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.926950 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-combined-ca-bundle\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.927092 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9wm\" (UniqueName: \"kubernetes.io/projected/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-kube-api-access-pk9wm\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.927135 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-config-data\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.927177 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skm6d\" (UniqueName: \"kubernetes.io/projected/4fe19914-d9c1-4a1d-bba5-77167bca38f2-kube-api-access-skm6d\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.927234 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-scripts\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 
06:03:59.927276 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-db-sync-config-data\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.927323 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-config\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.927350 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7nb\" (UniqueName: \"kubernetes.io/projected/ca061deb-f600-49db-8ac3-6213e22b2f76-kube-api-access-bc7nb\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.927846 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca061deb-f600-49db-8ac3-6213e22b2f76-etc-machine-id\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.931813 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ff9wl"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.933251 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ff9wl" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.938065 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.941686 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2ljdj" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.955153 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ff9wl"] Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.961876 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.965406 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-db-sync-config-data\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.968154 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-config-data\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.969881 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-db-sync-config-data\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " 
pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.970723 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-scripts\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.980244 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-combined-ca-bundle\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.986420 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc7nb\" (UniqueName: \"kubernetes.io/projected/ca061deb-f600-49db-8ac3-6213e22b2f76-kube-api-access-bc7nb\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.987610 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-combined-ca-bundle\") pod \"cinder-db-sync-sd52b\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " pod="openstack/cinder-db-sync-sd52b" Jan 03 06:03:59 crc kubenswrapper[4854]: I0103 06:03:59.988418 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-vkkvk"] Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.009975 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sd52b" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.010708 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9wm\" (UniqueName: \"kubernetes.io/projected/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-kube-api-access-pk9wm\") pod \"barbican-db-sync-lk7dp\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.013132 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.016289 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.018275 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.020238 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.021368 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.029809 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-scripts\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.029849 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-combined-ca-bundle\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.029872 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs7wq\" (UniqueName: \"kubernetes.io/projected/dc184ac1-7e14-435e-898d-93e19dab6615-kube-api-access-fs7wq\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.029900 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skm6d\" (UniqueName: \"kubernetes.io/projected/4fe19914-d9c1-4a1d-bba5-77167bca38f2-kube-api-access-skm6d\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.029964 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-config\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.029994 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc184ac1-7e14-435e-898d-93e19dab6615-logs\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.030044 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-combined-ca-bundle\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.030099 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-config-data\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.033857 4854 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-config\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.033908 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-combined-ca-bundle\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.083025 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-nrqr2"] Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.085168 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.124224 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.129522 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skm6d\" (UniqueName: \"kubernetes.io/projected/4fe19914-d9c1-4a1d-bba5-77167bca38f2-kube-api-access-skm6d\") pod \"neutron-db-sync-xqtnh\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135429 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-scripts\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135474 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-log-httpd\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135503 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc184ac1-7e14-435e-898d-93e19dab6615-logs\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135562 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmj96\" (UniqueName: \"kubernetes.io/projected/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-kube-api-access-bmj96\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135610 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-config-data\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135667 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-scripts\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135682 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135707 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-combined-ca-bundle\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135726 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs7wq\" (UniqueName: \"kubernetes.io/projected/dc184ac1-7e14-435e-898d-93e19dab6615-kube-api-access-fs7wq\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135750 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-run-httpd\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135772 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-config-data\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.135794 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.137715 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc184ac1-7e14-435e-898d-93e19dab6615-logs\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.268825 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.268951 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " 
pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.268982 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269150 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-run-httpd\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269236 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-config-data\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269299 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269439 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-scripts\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269515 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-log-httpd\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269651 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269712 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmj96\" (UniqueName: \"kubernetes.io/projected/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-kube-api-access-bmj96\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269742 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-config\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.269778 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzrb\" (UniqueName: 
\"kubernetes.io/projected/30740ab8-3540-4d4e-a677-2deac6c1b280-kube-api-access-8wzrb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.283626 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-log-httpd\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.285744 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-run-httpd\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.287338 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-scripts\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.316774 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-combined-ca-bundle\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.323722 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs7wq\" (UniqueName: \"kubernetes.io/projected/dc184ac1-7e14-435e-898d-93e19dab6615-kube-api-access-fs7wq\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.325852 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-config-data\") pod \"placement-db-sync-ff9wl\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.332637 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.360246 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-scripts\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.360567 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-config-data\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.361445 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bmj96\" (UniqueName: \"kubernetes.io/projected/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-kube-api-access-bmj96\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.361721 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.373280 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.373335 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-config\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.373358 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wzrb\" (UniqueName: \"kubernetes.io/projected/30740ab8-3540-4d4e-a677-2deac6c1b280-kube-api-access-8wzrb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.373431 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.373449 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.375618 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.376126 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-config\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.378247 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: 
\"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.378403 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.391928 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-nrqr2"] Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.393821 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.403509 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wzrb\" (UniqueName: \"kubernetes.io/projected/30740ab8-3540-4d4e-a677-2deac6c1b280-kube-api-access-8wzrb\") pod \"dnsmasq-dns-68dcc9cf6f-nrqr2\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.411490 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.416797 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.465230 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.849177 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-vkkvk"] Jan 03 06:04:00 crc kubenswrapper[4854]: I0103 06:04:00.913047 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pscs7"] Jan 03 06:04:01 crc kubenswrapper[4854]: W0103 06:04:01.139224 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f46296d_5d5c_4aa8_94e1_e8e5951da088.slice/crio-41c5188e210eaa59658606d5f57f53557732ef08e19a9aeb1dccefc4354822fc WatchSource:0}: Error finding container 41c5188e210eaa59658606d5f57f53557732ef08e19a9aeb1dccefc4354822fc: Status 404 returned error can't find the container with id 41c5188e210eaa59658606d5f57f53557732ef08e19a9aeb1dccefc4354822fc Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.145942 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9rnh5"] Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.296988 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9rnh5" event={"ID":"8f46296d-5d5c-4aa8-94e1-e8e5951da088","Type":"ContainerStarted","Data":"41c5188e210eaa59658606d5f57f53557732ef08e19a9aeb1dccefc4354822fc"} Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.298705 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" event={"ID":"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2","Type":"ContainerStarted","Data":"99c12fb3fba4af204bbe10803340e6e1c8b66477341c0f6c0b4c4e8ab46a4d9f"} Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.302567 4854 generic.go:334] "Generic (PLEG): container finished" podID="79760a75-c798-415d-be02-dd3a6a9c74ee" 
containerID="dfa07bc1646e00894267d072cfbd8cafc77ee3509d5e1b1db5a4a341d00721f4" exitCode=0 Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.302638 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"79760a75-c798-415d-be02-dd3a6a9c74ee","Type":"ContainerDied","Data":"dfa07bc1646e00894267d072cfbd8cafc77ee3509d5e1b1db5a4a341d00721f4"} Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.304933 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pscs7" event={"ID":"70382168-87d2-405b-9ee0-8a3969573750","Type":"ContainerStarted","Data":"1395e5198c492d0fbf97141c6644c0d7374fcace22b1a393c34c4cb99f4c844c"} Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.743381 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lk7dp"] Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.760264 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xqtnh"] Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.789948 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ff9wl"] Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.826032 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sd52b"] Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.894872 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-nrqr2"] Jan 03 06:04:01 crc kubenswrapper[4854]: I0103 06:04:01.954439 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.032261 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.079038 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.322478 4854 generic.go:334] "Generic (PLEG): container finished" podID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerID="756321ee3613bf6dc8c53edf1834b609201867cbaa2907214fed49e2f200da6b" exitCode=0 Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.322742 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" event={"ID":"30740ab8-3540-4d4e-a677-2deac6c1b280","Type":"ContainerDied","Data":"756321ee3613bf6dc8c53edf1834b609201867cbaa2907214fed49e2f200da6b"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.322790 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" event={"ID":"30740ab8-3540-4d4e-a677-2deac6c1b280","Type":"ContainerStarted","Data":"33a5dc663dd8b85bfebdb19d8004216f20379cb840acc92502fd0ee80e94e366"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.326775 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ff9wl" event={"ID":"dc184ac1-7e14-435e-898d-93e19dab6615","Type":"ContainerStarted","Data":"6babe08ecd37600028d31a914f298550bace0f9de4371b98024b79f67462276e"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.332486 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"79760a75-c798-415d-be02-dd3a6a9c74ee","Type":"ContainerStarted","Data":"e2af7467dae280858c2f304d5e6bd72712fff83f213ca117622d0be2839f6d64"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.334132 4854 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerStarted","Data":"ecc2df736ced624a2de286a9ef0f2a3e871434d7dc7261daabf050a5b1a2966f"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.335492 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lk7dp" event={"ID":"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4","Type":"ContainerStarted","Data":"83a55ddfeddef356295ac2aa6c98fa9b1759f1f9a323231eeade45af5c64f724"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.347857 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pscs7" event={"ID":"70382168-87d2-405b-9ee0-8a3969573750","Type":"ContainerStarted","Data":"0f058ad58c99658a32860eff06f299498f5189ce237c486e3b40dae3d2b46db0"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.359467 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"d92bd691b17f6fd31191921ce7b93a38ce83a68b745cfaff084e515ee4cf9a1e"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.364703 4854 generic.go:334] "Generic (PLEG): container finished" podID="a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" containerID="b431d735ed43f9d23dabd4d73ec657d003e2f5b012fedfc174aaa3ae08924a7f" exitCode=0 Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.364783 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" event={"ID":"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2","Type":"ContainerDied","Data":"b431d735ed43f9d23dabd4d73ec657d003e2f5b012fedfc174aaa3ae08924a7f"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.372067 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sd52b" event={"ID":"ca061deb-f600-49db-8ac3-6213e22b2f76","Type":"ContainerStarted","Data":"023fa117f5cafd138b401ad05378ebde7011a33485e2c9cb6e4a43af0698a536"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.375516 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xqtnh" event={"ID":"4fe19914-d9c1-4a1d-bba5-77167bca38f2","Type":"ContainerStarted","Data":"eb3520fc3c3653658357c578dc1ab6472976eef6377fb81043938c28784b4dce"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.375543 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xqtnh" event={"ID":"4fe19914-d9c1-4a1d-bba5-77167bca38f2","Type":"ContainerStarted","Data":"7f5d8c2f8fcbfc85a4daab24f3f1f2d0c6eeb1adb14f10a0ce2d1ce5d87ec382"} Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.379043 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pscs7" podStartSLOduration=3.379026952 podStartE2EDuration="3.379026952s" podCreationTimestamp="2026-01-03 06:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:02.377186716 +0000 UTC m=+1420.703763298" watchObservedRunningTime="2026-01-03 06:04:02.379026952 +0000 UTC m=+1420.705603514" Jan 03 06:04:02 crc kubenswrapper[4854]: I0103 06:04:02.464577 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-xqtnh" podStartSLOduration=3.464557772 podStartE2EDuration="3.464557772s" podCreationTimestamp="2026-01-03 06:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:02.440783175 +0000 UTC m=+1420.767359747" watchObservedRunningTime="2026-01-03 06:04:02.464557772 +0000 UTC m=+1420.791134344" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.013913 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.160022 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-dns-svc\") pod \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.160087 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-config\") pod \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.160287 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-sb\") pod \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.160306 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-nb\") pod \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.160402 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv9w8\" (UniqueName: \"kubernetes.io/projected/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-kube-api-access-rv9w8\") pod \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\" (UID: \"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2\") " Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.165203 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-kube-api-access-rv9w8" (OuterVolumeSpecName: "kube-api-access-rv9w8") pod "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" (UID: "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2"). InnerVolumeSpecName "kube-api-access-rv9w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.202363 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" (UID: "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.204464 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-config" (OuterVolumeSpecName: "config") pod "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" (UID: "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.214715 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" (UID: "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.216900 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" (UID: "a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.264312 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv9w8\" (UniqueName: \"kubernetes.io/projected/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-kube-api-access-rv9w8\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.264563 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.264665 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.264737 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.264794 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.500646 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" event={"ID":"a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2","Type":"ContainerDied","Data":"99c12fb3fba4af204bbe10803340e6e1c8b66477341c0f6c0b4c4e8ab46a4d9f"} Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.500702 4854 scope.go:117] "RemoveContainer" containerID="b431d735ed43f9d23dabd4d73ec657d003e2f5b012fedfc174aaa3ae08924a7f" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.500835 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-vkkvk" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.520404 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" event={"ID":"30740ab8-3540-4d4e-a677-2deac6c1b280","Type":"ContainerStarted","Data":"e4d2c73025d878d75de50e1c68fe008f5cb923378ffa7fd84739cd20ac45b8e6"} Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.552451 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" podStartSLOduration=4.552426382 podStartE2EDuration="4.552426382s" podCreationTimestamp="2026-01-03 06:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:03.548499245 +0000 UTC m=+1421.875075827" watchObservedRunningTime="2026-01-03 06:04:03.552426382 +0000 UTC m=+1421.879002954" Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.598615 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-vkkvk"] Jan 03 06:04:03 crc kubenswrapper[4854]: I0103 06:04:03.607599 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-vkkvk"] Jan 03 06:04:04 crc kubenswrapper[4854]: I0103 06:04:04.156034 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" path="/var/lib/kubelet/pods/a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2/volumes" Jan 03 06:04:04 crc kubenswrapper[4854]: I0103 06:04:04.565531 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v8pxd" event={"ID":"9acf61c2-85c5-4ba2-9f4b-0778c961a268","Type":"ContainerStarted","Data":"52b8654c0bbaccf80b54446637937ba13332e3f5312bc9670ce3a8571a939151"} Jan 03 06:04:04 crc kubenswrapper[4854]: I0103 06:04:04.582458 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:04 crc kubenswrapper[4854]: I0103 06:04:04.610033 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-v8pxd" podStartSLOduration=3.480703391 podStartE2EDuration="36.610003295s" podCreationTimestamp="2026-01-03 06:03:28 +0000 UTC" firstStartedPulling="2026-01-03 06:03:29.738983189 +0000 UTC m=+1388.065559761" lastFinishedPulling="2026-01-03 06:04:02.868283093 +0000 UTC m=+1421.194859665" observedRunningTime="2026-01-03 06:04:04.607047752 +0000 UTC m=+1422.933624324" watchObservedRunningTime="2026-01-03 06:04:04.610003295 +0000 UTC m=+1422.936579867" Jan 03 06:04:05 crc kubenswrapper[4854]: I0103 06:04:05.632417 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"79760a75-c798-415d-be02-dd3a6a9c74ee","Type":"ContainerStarted","Data":"12c06d7909dc7a7d34ac2ccd94e549dbfa84a4743cdebe7fca85ba7e5f144c56"} Jan 03 06:04:05 crc kubenswrapper[4854]: I0103 06:04:05.632807 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"79760a75-c798-415d-be02-dd3a6a9c74ee","Type":"ContainerStarted","Data":"eba01cf8998719126bd6e64f8dfbdf65e63dec71dd57486f29f3410845c1673f"} Jan 03 06:04:05 crc kubenswrapper[4854]: I0103 06:04:05.680291 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.680276691 podStartE2EDuration="18.680276691s" podCreationTimestamp="2026-01-03 06:03:47 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:05.678684872 +0000 UTC m=+1424.005261454" watchObservedRunningTime="2026-01-03 06:04:05.680276691 +0000 UTC m=+1424.006853263" Jan 03 06:04:06 crc kubenswrapper[4854]: I0103 06:04:06.666883 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"3e2372c778e2d76b566a55619d25841e7555fd1551a10b2742ac91e03dcfee03"} Jan 03 06:04:06 crc kubenswrapper[4854]: I0103 06:04:06.671973 4854 generic.go:334] "Generic (PLEG): container finished" podID="70382168-87d2-405b-9ee0-8a3969573750" containerID="0f058ad58c99658a32860eff06f299498f5189ce237c486e3b40dae3d2b46db0" exitCode=0 Jan 03 06:04:06 crc kubenswrapper[4854]: I0103 06:04:06.673260 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pscs7" event={"ID":"70382168-87d2-405b-9ee0-8a3969573750","Type":"ContainerDied","Data":"0f058ad58c99658a32860eff06f299498f5189ce237c486e3b40dae3d2b46db0"} Jan 03 06:04:08 crc kubenswrapper[4854]: I0103 06:04:08.750092 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.236159 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.361656 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6bf6\" (UniqueName: \"kubernetes.io/projected/70382168-87d2-405b-9ee0-8a3969573750-kube-api-access-d6bf6\") pod \"70382168-87d2-405b-9ee0-8a3969573750\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.361956 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-scripts\") pod \"70382168-87d2-405b-9ee0-8a3969573750\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.361992 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-fernet-keys\") pod \"70382168-87d2-405b-9ee0-8a3969573750\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.362030 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-combined-ca-bundle\") pod \"70382168-87d2-405b-9ee0-8a3969573750\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.362145 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-config-data\") pod \"70382168-87d2-405b-9ee0-8a3969573750\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.362207 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-credential-keys\") pod 
\"70382168-87d2-405b-9ee0-8a3969573750\" (UID: \"70382168-87d2-405b-9ee0-8a3969573750\") " Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.368897 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "70382168-87d2-405b-9ee0-8a3969573750" (UID: "70382168-87d2-405b-9ee0-8a3969573750"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.369727 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "70382168-87d2-405b-9ee0-8a3969573750" (UID: "70382168-87d2-405b-9ee0-8a3969573750"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.370685 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70382168-87d2-405b-9ee0-8a3969573750-kube-api-access-d6bf6" (OuterVolumeSpecName: "kube-api-access-d6bf6") pod "70382168-87d2-405b-9ee0-8a3969573750" (UID: "70382168-87d2-405b-9ee0-8a3969573750"). InnerVolumeSpecName "kube-api-access-d6bf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.376930 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-scripts" (OuterVolumeSpecName: "scripts") pod "70382168-87d2-405b-9ee0-8a3969573750" (UID: "70382168-87d2-405b-9ee0-8a3969573750"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.401263 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-config-data" (OuterVolumeSpecName: "config-data") pod "70382168-87d2-405b-9ee0-8a3969573750" (UID: "70382168-87d2-405b-9ee0-8a3969573750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.404360 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70382168-87d2-405b-9ee0-8a3969573750" (UID: "70382168-87d2-405b-9ee0-8a3969573750"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.466517 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.466569 4854 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.466588 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6bf6\" (UniqueName: \"kubernetes.io/projected/70382168-87d2-405b-9ee0-8a3969573750-kube-api-access-d6bf6\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.466601 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.466610 4854 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.466619 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70382168-87d2-405b-9ee0-8a3969573750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.713962 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pscs7" event={"ID":"70382168-87d2-405b-9ee0-8a3969573750","Type":"ContainerDied","Data":"1395e5198c492d0fbf97141c6644c0d7374fcace22b1a393c34c4cb99f4c844c"} Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.713999 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1395e5198c492d0fbf97141c6644c0d7374fcace22b1a393c34c4cb99f4c844c" Jan 03 06:04:09 crc kubenswrapper[4854]: I0103 06:04:09.714053 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pscs7" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.423187 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.424636 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pscs7"] Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.439869 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pscs7"] Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.491957 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wtw2t"] Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.492260 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-wtw2t" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" containerID="cri-o://b7d8550b767b745c10631f8f4cfd712f0fbf747774b54c5ddba943f86791c42c" gracePeriod=10 Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.574296 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-86vzw"] Jan 03 06:04:10 crc kubenswrapper[4854]: E0103 06:04:10.574989 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70382168-87d2-405b-9ee0-8a3969573750" containerName="keystone-bootstrap" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.575010 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="70382168-87d2-405b-9ee0-8a3969573750" containerName="keystone-bootstrap" Jan 03 06:04:10 crc kubenswrapper[4854]: E0103 06:04:10.575028 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" containerName="init" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.575037 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" containerName="init" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.575314 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e0538f-2e4f-4738-8b00-14fe9cb6f2c2" containerName="init" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.575337 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="70382168-87d2-405b-9ee0-8a3969573750" containerName="keystone-bootstrap" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.576073 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.580271 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.580482 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.580880 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.581173 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c82z5" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.584273 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.590574 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-86vzw"] Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.707512 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-config-data\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.707820 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-scripts\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.707865 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-combined-ca-bundle\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.707889 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-fernet-keys\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.707919 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfhq\" (UniqueName: \"kubernetes.io/projected/c8beded3-7a32-47a0-a12a-e346422e7323-kube-api-access-fzfhq\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.707968 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-credential-keys\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.734216 4854 generic.go:334] "Generic (PLEG): container finished" 
podID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerID="b7d8550b767b745c10631f8f4cfd712f0fbf747774b54c5ddba943f86791c42c" exitCode=0 Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.734257 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wtw2t" event={"ID":"6e3f49c8-b025-4f3c-b356-847e0286a103","Type":"ContainerDied","Data":"b7d8550b767b745c10631f8f4cfd712f0fbf747774b54c5ddba943f86791c42c"} Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.810270 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-config-data\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.810322 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-scripts\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.810372 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-combined-ca-bundle\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.810399 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-fernet-keys\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.810420 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzfhq\" (UniqueName: \"kubernetes.io/projected/c8beded3-7a32-47a0-a12a-e346422e7323-kube-api-access-fzfhq\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.810467 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-credential-keys\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.816633 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-credential-keys\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.816814 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-combined-ca-bundle\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.817060 4854 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-config-data\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.817383 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-fernet-keys\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.826466 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-scripts\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.830111 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzfhq\" (UniqueName: \"kubernetes.io/projected/c8beded3-7a32-47a0-a12a-e346422e7323-kube-api-access-fzfhq\") pod \"keystone-bootstrap-86vzw\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:10 crc kubenswrapper[4854]: I0103 06:04:10.906313 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:11 crc kubenswrapper[4854]: I0103 06:04:11.755567 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:04:11 crc kubenswrapper[4854]: I0103 06:04:11.755983 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:04:12 crc kubenswrapper[4854]: I0103 06:04:12.129483 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70382168-87d2-405b-9ee0-8a3969573750" path="/var/lib/kubelet/pods/70382168-87d2-405b-9ee0-8a3969573750/volumes" Jan 03 06:04:14 crc kubenswrapper[4854]: I0103 06:04:14.894223 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wtw2t" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Jan 03 06:04:18 crc kubenswrapper[4854]: I0103 06:04:18.746602 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 03 06:04:18 crc kubenswrapper[4854]: I0103 06:04:18.753204 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 03 06:04:18 crc kubenswrapper[4854]: I0103 06:04:18.834663 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 03 06:04:19 crc kubenswrapper[4854]: I0103 06:04:19.893937 4854 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wtw2t" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Jan 03 06:04:24 crc kubenswrapper[4854]: I0103 06:04:24.893998 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wtw2t" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Jan 03 06:04:24 crc kubenswrapper[4854]: I0103 06:04:24.894656 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:04:30 crc kubenswrapper[4854]: I0103 06:04:30.965113 4854 generic.go:334] "Generic (PLEG): container finished" podID="9acf61c2-85c5-4ba2-9f4b-0778c961a268" containerID="52b8654c0bbaccf80b54446637937ba13332e3f5312bc9670ce3a8571a939151" exitCode=0 Jan 03 06:04:30 crc kubenswrapper[4854]: I0103 06:04:30.965182 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v8pxd" event={"ID":"9acf61c2-85c5-4ba2-9f4b-0778c961a268","Type":"ContainerDied","Data":"52b8654c0bbaccf80b54446637937ba13332e3f5312bc9670ce3a8571a939151"} Jan 03 06:04:34 crc kubenswrapper[4854]: I0103 06:04:34.893734 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wtw2t" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.027235 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v8pxd" event={"ID":"9acf61c2-85c5-4ba2-9f4b-0778c961a268","Type":"ContainerDied","Data":"9dfefc4a57e2033d1660866effe3614369edfdb2238576eed082b3834e3a12bd"} Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.027580 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dfefc4a57e2033d1660866effe3614369edfdb2238576eed082b3834e3a12bd" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.030711 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wtw2t" event={"ID":"6e3f49c8-b025-4f3c-b356-847e0286a103","Type":"ContainerDied","Data":"962a019d9d1075c60d4ad9fa8502f53d59fa2b6c80ecebe9f348e81e1bef1dd3"} Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.030768 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="962a019d9d1075c60d4ad9fa8502f53d59fa2b6c80ecebe9f348e81e1bef1dd3" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.065335 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v8pxd" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.076033 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.192557 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-config-data\") pod \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.192656 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-config\") pod \"6e3f49c8-b025-4f3c-b356-847e0286a103\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.192683 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-sb\") pod \"6e3f49c8-b025-4f3c-b356-847e0286a103\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.192754 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-dns-svc\") pod \"6e3f49c8-b025-4f3c-b356-847e0286a103\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.192833 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-db-sync-config-data\") pod \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.192892 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clkht\" (UniqueName: \"kubernetes.io/projected/9acf61c2-85c5-4ba2-9f4b-0778c961a268-kube-api-access-clkht\") pod \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.192925 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-combined-ca-bundle\") pod \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\" (UID: \"9acf61c2-85c5-4ba2-9f4b-0778c961a268\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.193073 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8bk9\" (UniqueName: \"kubernetes.io/projected/6e3f49c8-b025-4f3c-b356-847e0286a103-kube-api-access-w8bk9\") pod \"6e3f49c8-b025-4f3c-b356-847e0286a103\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.193108 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-nb\") pod \"6e3f49c8-b025-4f3c-b356-847e0286a103\" (UID: \"6e3f49c8-b025-4f3c-b356-847e0286a103\") " Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.202312 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"9acf61c2-85c5-4ba2-9f4b-0778c961a268" (UID: "9acf61c2-85c5-4ba2-9f4b-0778c961a268"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.202343 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3f49c8-b025-4f3c-b356-847e0286a103-kube-api-access-w8bk9" (OuterVolumeSpecName: "kube-api-access-w8bk9") pod "6e3f49c8-b025-4f3c-b356-847e0286a103" (UID: "6e3f49c8-b025-4f3c-b356-847e0286a103"). InnerVolumeSpecName "kube-api-access-w8bk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.202744 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9acf61c2-85c5-4ba2-9f4b-0778c961a268-kube-api-access-clkht" (OuterVolumeSpecName: "kube-api-access-clkht") pod "9acf61c2-85c5-4ba2-9f4b-0778c961a268" (UID: "9acf61c2-85c5-4ba2-9f4b-0778c961a268"). InnerVolumeSpecName "kube-api-access-clkht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.236351 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9acf61c2-85c5-4ba2-9f4b-0778c961a268" (UID: "9acf61c2-85c5-4ba2-9f4b-0778c961a268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.263833 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6e3f49c8-b025-4f3c-b356-847e0286a103" (UID: "6e3f49c8-b025-4f3c-b356-847e0286a103"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.266474 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-config-data" (OuterVolumeSpecName: "config-data") pod "9acf61c2-85c5-4ba2-9f4b-0778c961a268" (UID: "9acf61c2-85c5-4ba2-9f4b-0778c961a268"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.267065 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6e3f49c8-b025-4f3c-b356-847e0286a103" (UID: "6e3f49c8-b025-4f3c-b356-847e0286a103"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.267176 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e3f49c8-b025-4f3c-b356-847e0286a103" (UID: "6e3f49c8-b025-4f3c-b356-847e0286a103"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.273778 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-config" (OuterVolumeSpecName: "config") pod "6e3f49c8-b025-4f3c-b356-847e0286a103" (UID: "6e3f49c8-b025-4f3c-b356-847e0286a103"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.295974 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clkht\" (UniqueName: \"kubernetes.io/projected/9acf61c2-85c5-4ba2-9f4b-0778c961a268-kube-api-access-clkht\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.296016 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.296026 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.296035 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8bk9\" (UniqueName: \"kubernetes.io/projected/6e3f49c8-b025-4f3c-b356-847e0286a103-kube-api-access-w8bk9\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.296045 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.296054 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.296064 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.296072 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e3f49c8-b025-4f3c-b356-847e0286a103-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: I0103 06:04:37.296111 4854 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9acf61c2-85c5-4ba2-9f4b-0778c961a268-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:37 crc kubenswrapper[4854]: E0103 06:04:37.483686 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 03 06:04:37 crc kubenswrapper[4854]: E0103 06:04:37.483877 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrnjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-9rnh5_openstack(8f46296d-5d5c-4aa8-94e1-e8e5951da088): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:04:37 crc kubenswrapper[4854]: E0103 06:04:37.485647 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-9rnh5" podUID="8f46296d-5d5c-4aa8-94e1-e8e5951da088" Jan 03 06:04:37 crc kubenswrapper[4854]: E0103 06:04:37.966862 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 03 06:04:37 crc kubenswrapper[4854]: E0103 06:04:37.967260 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk9wm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lk7dp_openstack(cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:04:37 crc kubenswrapper[4854]: E0103 06:04:37.969037 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lk7dp" podUID="cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" Jan 03 06:04:38 crc kubenswrapper[4854]: I0103 06:04:38.039470 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wtw2t" Jan 03 06:04:38 crc kubenswrapper[4854]: I0103 06:04:38.039493 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v8pxd" Jan 03 06:04:38 crc kubenswrapper[4854]: E0103 06:04:38.041227 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-lk7dp" podUID="cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" Jan 03 06:04:38 crc kubenswrapper[4854]: E0103 06:04:38.042353 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-9rnh5" podUID="8f46296d-5d5c-4aa8-94e1-e8e5951da088" Jan 03 06:04:38 crc kubenswrapper[4854]: I0103 06:04:38.111452 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wtw2t"] Jan 03 06:04:38 crc kubenswrapper[4854]: I0103 06:04:38.155103 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wtw2t"] Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.084972 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-nfn7r"] Jan 03 06:04:39 crc kubenswrapper[4854]: E0103 06:04:39.085620 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="init" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.085632 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="init" Jan 03 06:04:39 crc kubenswrapper[4854]: E0103 06:04:39.085648 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.085653 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" Jan 03 06:04:39 crc kubenswrapper[4854]: E0103 06:04:39.085666 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9acf61c2-85c5-4ba2-9f4b-0778c961a268" containerName="glance-db-sync" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.085673 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="9acf61c2-85c5-4ba2-9f4b-0778c961a268" containerName="glance-db-sync" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.085849 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.085874 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="9acf61c2-85c5-4ba2-9f4b-0778c961a268" containerName="glance-db-sync" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.090729 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.096323 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-config\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.096410 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-dns-svc\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.096520 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.096558 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.096580 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dj6t\" (UniqueName: \"kubernetes.io/projected/8f24e646-0b5a-449a-bce4-35b97195975d-kube-api-access-7dj6t\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.140143 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-nfn7r"] Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.197942 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-config\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.198112 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-dns-svc\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.198275 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.198346 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.198370 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dj6t\" (UniqueName: \"kubernetes.io/projected/8f24e646-0b5a-449a-bce4-35b97195975d-kube-api-access-7dj6t\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.199138 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-config\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.199290 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.210208 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-dns-svc\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.210497 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.261462 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dj6t\" (UniqueName: \"kubernetes.io/projected/8f24e646-0b5a-449a-bce4-35b97195975d-kube-api-access-7dj6t\") pod \"dnsmasq-dns-f84976bdf-nfn7r\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.429822 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.895000 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wtw2t" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Jan 03 06:04:39 crc kubenswrapper[4854]: I0103 06:04:39.999399 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.008522 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.021817 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sgxt6" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.022512 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.027014 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.042397 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.130836 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.130988 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72zz\" (UniqueName: \"kubernetes.io/projected/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-kube-api-access-p72zz\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.131023 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-config-data\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.131109 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.131135 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-logs\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.131183 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-scripts\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.131207 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: 
\"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.139822 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3f49c8-b025-4f3c-b356-847e0286a103" path="/var/lib/kubelet/pods/6e3f49c8-b025-4f3c-b356-847e0286a103/volumes" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.227784 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.230186 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.231934 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.233306 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-logs\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.233331 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.233392 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.233410 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-scripts\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.233557 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.233694 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p72zz\" (UniqueName: \"kubernetes.io/projected/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-kube-api-access-p72zz\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.233721 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.234992 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-logs\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.235237 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.239327 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.239711 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.239764 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c9f88882bd3572929d1777fc402cfbfc71f661649bff4d337785efef0a76426b/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.240354 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.240565 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-config-data\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.244718 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-scripts\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.273012 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72zz\" (UniqueName: \"kubernetes.io/projected/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-kube-api-access-p72zz\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.317819 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.336223 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.336269 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.336354 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-logs\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.336394 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.336421 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.336442 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx874\" (UniqueName: \"kubernetes.io/projected/35c1899a-114d-482d-9798-89c7c65fb40b-kube-api-access-mx874\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.336481 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.374159 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.438559 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.438617 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.438691 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-logs\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.438736 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.439402 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.439435 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx874\" (UniqueName: \"kubernetes.io/projected/35c1899a-114d-482d-9798-89c7c65fb40b-kube-api-access-mx874\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.439458 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-logs\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.439485 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.439765 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 
03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.442663 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.442703 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b13373a9549fe2a930d42f4a05d9c8d5f6308eb7dd62d88a464a85534d03fb8/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.444275 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.447602 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.447907 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.461749 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx874\" (UniqueName: \"kubernetes.io/projected/35c1899a-114d-482d-9798-89c7c65fb40b-kube-api-access-mx874\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.508473 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: I0103 06:04:40.651152 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 06:04:40 crc kubenswrapper[4854]: E0103 06:04:40.950379 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 03 06:04:40 crc kubenswrapper[4854]: E0103 06:04:40.950807 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bc7nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-sd52b_openstack(ca061deb-f600-49db-8ac3-6213e22b2f76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 06:04:40 crc kubenswrapper[4854]: E0103 06:04:40.952222 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-sd52b" podUID="ca061deb-f600-49db-8ac3-6213e22b2f76" Jan 03 06:04:41 crc kubenswrapper[4854]: I0103 06:04:41.096555 4854 generic.go:334] "Generic (PLEG): container finished" podID="4fe19914-d9c1-4a1d-bba5-77167bca38f2" 
containerID="eb3520fc3c3653658357c578dc1ab6472976eef6377fb81043938c28784b4dce" exitCode=0 Jan 03 06:04:41 crc kubenswrapper[4854]: I0103 06:04:41.097577 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xqtnh" event={"ID":"4fe19914-d9c1-4a1d-bba5-77167bca38f2","Type":"ContainerDied","Data":"eb3520fc3c3653658357c578dc1ab6472976eef6377fb81043938c28784b4dce"} Jan 03 06:04:41 crc kubenswrapper[4854]: E0103 06:04:41.110410 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-sd52b" podUID="ca061deb-f600-49db-8ac3-6213e22b2f76" Jan 03 06:04:41 crc kubenswrapper[4854]: I0103 06:04:41.652107 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-86vzw"] Jan 03 06:04:41 crc kubenswrapper[4854]: I0103 06:04:41.663042 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 03 06:04:41 crc kubenswrapper[4854]: I0103 06:04:41.757217 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:04:41 crc kubenswrapper[4854]: I0103 06:04:41.757265 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:04:41 crc kubenswrapper[4854]: I0103 06:04:41.842981 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-nfn7r"] Jan 03 06:04:41 crc kubenswrapper[4854]: I0103 06:04:41.928940 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.152319 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerStarted","Data":"0aed09d0236567b894aaca3c66501d8aa34d7c0933ab3953fd875848ea83a542"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.152358 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"a5fdb7263b602867f9bfad7b66a3b41cf7281a46c7629cbc51364df77525b1f7"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.152392 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"ab7638491fc127d4b1775eb974f89ed3df5c859d1b3d8649128ecf7e4e417b2c"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.152404 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"dbc412effc620f7a0319c5cfd2d235671c26ce9ba4b53458bd07398ce9007cda"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.154423 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" 
event={"ID":"8f24e646-0b5a-449a-bce4-35b97195975d","Type":"ContainerStarted","Data":"448b4127b10bf6574a5d229c209b781beeb247d1bcc4cfd19ee1e31ba9e311cb"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.157349 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e","Type":"ContainerStarted","Data":"77ec03512c68c4fe7d8dd2ab65edb9c82cf60d7da6026940b820bb14744bbe1f"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.159299 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-86vzw" event={"ID":"c8beded3-7a32-47a0-a12a-e346422e7323","Type":"ContainerStarted","Data":"7ad3dc12935ce23f1c5caad7b24a62c1c73514796ef5fdcd0e70dae3c56ee113"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.159352 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-86vzw" event={"ID":"c8beded3-7a32-47a0-a12a-e346422e7323","Type":"ContainerStarted","Data":"2daf96baec0fc83b527d6aca89150e852e8474e09a4d5fb5af2f37be042b954d"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.172109 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ff9wl" event={"ID":"dc184ac1-7e14-435e-898d-93e19dab6615","Type":"ContainerStarted","Data":"5b934e7550dec6ee13cc8c7fe7a463b3cbaad26a3d961beee14b114f14323ff8"} Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.242806 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-86vzw" podStartSLOduration=32.242789581 podStartE2EDuration="32.242789581s" podCreationTimestamp="2026-01-03 06:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:42.234211073 +0000 UTC m=+1460.560787645" watchObservedRunningTime="2026-01-03 06:04:42.242789581 +0000 UTC m=+1460.569366153" Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.283341 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ff9wl" podStartSLOduration=4.177236955 podStartE2EDuration="43.283319865s" podCreationTimestamp="2026-01-03 06:03:59 +0000 UTC" firstStartedPulling="2026-01-03 06:04:01.776386054 +0000 UTC m=+1420.102962626" lastFinishedPulling="2026-01-03 06:04:40.882468964 +0000 UTC m=+1459.209045536" observedRunningTime="2026-01-03 06:04:42.254034704 +0000 UTC m=+1460.580611276" watchObservedRunningTime="2026-01-03 06:04:42.283319865 +0000 UTC m=+1460.609896437" Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.573581 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.659519 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.751430 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.772166 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.818022 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skm6d\" (UniqueName: \"kubernetes.io/projected/4fe19914-d9c1-4a1d-bba5-77167bca38f2-kube-api-access-skm6d\") pod \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.818311 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-combined-ca-bundle\") pod \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.818452 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-config\") pod \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\" (UID: \"4fe19914-d9c1-4a1d-bba5-77167bca38f2\") " Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.841105 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe19914-d9c1-4a1d-bba5-77167bca38f2-kube-api-access-skm6d" (OuterVolumeSpecName: "kube-api-access-skm6d") pod "4fe19914-d9c1-4a1d-bba5-77167bca38f2" (UID: "4fe19914-d9c1-4a1d-bba5-77167bca38f2"). InnerVolumeSpecName "kube-api-access-skm6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.922656 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skm6d\" (UniqueName: \"kubernetes.io/projected/4fe19914-d9c1-4a1d-bba5-77167bca38f2-kube-api-access-skm6d\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.925694 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-config" (OuterVolumeSpecName: "config") pod "4fe19914-d9c1-4a1d-bba5-77167bca38f2" (UID: "4fe19914-d9c1-4a1d-bba5-77167bca38f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
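
The reconciler_common.go entries above show the kubelet's volume reconciler tearing down the volumes of the deleted neutron-db-sync-xqtnh pod: each volume still present in the actual state of the world but absent from the desired state gets an UnmountVolume, then is reported as detached. A toy sketch of that desired-versus-actual loop (illustrative types and print statements, not the kubelet's real reconciler):

package main

import "fmt"

// volume is a toy stand-in for the reconciler's per-volume record.
type volume struct{ name, pod string }

// reconcile mirrors the loop visible in the log: mounted volumes that are
// no longer desired get unmounted and reported detached; desired volumes
// not yet mounted get mounted.
func reconcile(desired, actual map[string]volume) {
	for key, v := range actual {
		if _, ok := desired[key]; !ok {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
			delete(actual, key)
			fmt.Printf("Volume detached for volume %q\n", v.name)
		}
	}
	for key, v := range desired {
		if _, ok := actual[key]; !ok {
			fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
			actual[key] = v
		}
	}
}

func main() {
	// The deleted pod's volumes are still in the actual state...
	actual := map[string]volume{
		"kube-api-access-skm6d": {"kube-api-access-skm6d", "neutron-db-sync-xqtnh"},
		"config":                {"config", "neutron-db-sync-xqtnh"},
	}
	// ...but nothing is desired for it any more, so both get unmounted.
	reconcile(map[string]volume{}, actual)
}

Jan 03 06:04:42 crc kubenswrapper[4854]: I0103 06:04:42.927873 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fe19914-d9c1-4a1d-bba5-77167bca38f2" (UID: "4fe19914-d9c1-4a1d-bba5-77167bca38f2"). InnerVolumeSpecName "combined-ca-bundle". 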
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.026805 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.026834 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe19914-d9c1-4a1d-bba5-77167bca38f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.188725 4854 generic.go:334] "Generic (PLEG): container finished" podID="8f24e646-0b5a-449a-bce4-35b97195975d" containerID="5b29b8cecbe3078cd76879a06e354aab32d3dc82be8d94482b28e17eef937b34" exitCode=0 Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.188793 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" event={"ID":"8f24e646-0b5a-449a-bce4-35b97195975d","Type":"ContainerDied","Data":"5b29b8cecbe3078cd76879a06e354aab32d3dc82be8d94482b28e17eef937b34"} Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.195050 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e","Type":"ContainerStarted","Data":"d6b4b3d5aa97256c15631c081f1af1df3520b4ff236e92cc6ca50050a1790fd5"} Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.225562 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35c1899a-114d-482d-9798-89c7c65fb40b","Type":"ContainerStarted","Data":"447e847c097ce43bb900b656f3b7a64216b8835de5da70241969b41c80e78edf"} Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.253366 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xqtnh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.253439 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xqtnh" event={"ID":"4fe19914-d9c1-4a1d-bba5-77167bca38f2","Type":"ContainerDied","Data":"7f5d8c2f8fcbfc85a4daab24f3f1f2d0c6eeb1adb14f10a0ce2d1ce5d87ec382"} Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.253463 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5d8c2f8fcbfc85a4daab24f3f1f2d0c6eeb1adb14f10a0ce2d1ce5d87ec382" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.404522 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-nfn7r"] Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.456324 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fb745b69-f55rh"] Jan 03 06:04:43 crc kubenswrapper[4854]: E0103 06:04:43.457919 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe19914-d9c1-4a1d-bba5-77167bca38f2" containerName="neutron-db-sync" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.457940 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe19914-d9c1-4a1d-bba5-77167bca38f2" containerName="neutron-db-sync" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.458209 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe19914-d9c1-4a1d-bba5-77167bca38f2" containerName="neutron-db-sync" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.471072 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.494913 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-f55rh"] Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.568599 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84c5455478-hczhs"] Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.574999 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.575100 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-config\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.575789 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mnkg\" (UniqueName: \"kubernetes.io/projected/0cac862a-2a43-44ed-903a-8d7b09100ac3-kube-api-access-9mnkg\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.575938 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-dns-svc\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.575967 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.579509 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.582819 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.582904 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vqhx8" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.583053 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.583637 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.612694 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84c5455478-hczhs"] Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679435 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-config\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679496 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5nj8\" (UniqueName: \"kubernetes.io/projected/46254f53-deed-4254-801c-1db0db3ec56a-kube-api-access-t5nj8\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679531 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-httpd-config\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679593 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-ovndb-tls-certs\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679638 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-config\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679730 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mnkg\" (UniqueName: \"kubernetes.io/projected/0cac862a-2a43-44ed-903a-8d7b09100ac3-kube-api-access-9mnkg\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679829 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-dns-svc\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: 
\"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679880 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.679929 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-combined-ca-bundle\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.680296 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.680399 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-config\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.681005 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.681245 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-dns-svc\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.681390 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.711713 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mnkg\" (UniqueName: \"kubernetes.io/projected/0cac862a-2a43-44ed-903a-8d7b09100ac3-kube-api-access-9mnkg\") pod \"dnsmasq-dns-fb745b69-f55rh\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") " pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.783812 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-ovndb-tls-certs\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: 
I0103 06:04:43.783907 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-config\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.784109 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-combined-ca-bundle\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.784192 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5nj8\" (UniqueName: \"kubernetes.io/projected/46254f53-deed-4254-801c-1db0db3ec56a-kube-api-access-t5nj8\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.784227 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-httpd-config\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.789815 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-httpd-config\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.791225 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-ovndb-tls-certs\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.796932 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-combined-ca-bundle\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.805060 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-config\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.806655 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.821263 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5nj8\" (UniqueName: \"kubernetes.io/projected/46254f53-deed-4254-801c-1db0db3ec56a-kube-api-access-t5nj8\") pod \"neutron-84c5455478-hczhs\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:43 crc kubenswrapper[4854]: I0103 06:04:43.929027 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:04:44 crc kubenswrapper[4854]: I0103 06:04:44.438750 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-f55rh"] Jan 03 06:04:44 crc kubenswrapper[4854]: I0103 06:04:44.703407 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84c5455478-hczhs"] Jan 03 06:04:45 crc kubenswrapper[4854]: W0103 06:04:45.224545 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46254f53_deed_4254_801c_1db0db3ec56a.slice/crio-d0c4cdffe46251f9d88830d7cb175feb376e38c04043f8674c983072aaa61a72 WatchSource:0}: Error finding container d0c4cdffe46251f9d88830d7cb175feb376e38c04043f8674c983072aaa61a72: Status 404 returned error can't find the container with id d0c4cdffe46251f9d88830d7cb175feb376e38c04043f8674c983072aaa61a72 Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.370358 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35c1899a-114d-482d-9798-89c7c65fb40b","Type":"ContainerStarted","Data":"d43c09df31926f65b58a6a10b6696fd843aa799bfa91536ea6b7f4d8695330a3"} Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.370438 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35c1899a-114d-482d-9798-89c7c65fb40b","Type":"ContainerStarted","Data":"ebd1f3e6f326c8a461319b5669575dd8e7f36d72687adbf4d825c2e1129c91a2"} Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.370558 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" containerName="glance-log" containerID="cri-o://d43c09df31926f65b58a6a10b6696fd843aa799bfa91536ea6b7f4d8695330a3" gracePeriod=30 Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.371105 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" containerName="glance-httpd" containerID="cri-o://ebd1f3e6f326c8a461319b5669575dd8e7f36d72687adbf4d825c2e1129c91a2" gracePeriod=30 Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.384053 4854 generic.go:334] "Generic (PLEG): container finished" podID="dc184ac1-7e14-435e-898d-93e19dab6615" containerID="5b934e7550dec6ee13cc8c7fe7a463b3cbaad26a3d961beee14b114f14323ff8" exitCode=0 Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.384180 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ff9wl" event={"ID":"dc184ac1-7e14-435e-898d-93e19dab6615","Type":"ContainerDied","Data":"5b934e7550dec6ee13cc8c7fe7a463b3cbaad26a3d961beee14b114f14323ff8"}
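
The kuberuntime_container.go entries above ("Killing container with a grace period", gracePeriod=30) follow the standard two-step stop: ask the container to exit, wait out the grace period, then force-kill whatever is left. A self-contained Go sketch of that pattern applied to an ordinary process (toy process management under stated assumptions, not CRI-O's implementation; Unix signals assumed):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, waits up to the grace period for the
// process to exit on its own, then falls back to SIGKILL.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // the polite stop
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace expired: force-kill
		<-done
		fmt.Println("force-killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second) // the pods in this log use 30s and 10s
}

Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.399289 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 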
pod="openstack/dnsmasq-dns-fb745b69-f55rh" event={"ID":"0cac862a-2a43-44ed-903a-8d7b09100ac3","Type":"ContainerStarted","Data":"f622e97f672d12adc68ca13ff79121c991857a3cc9c602030c590bfd55bd0b6c"} Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.405844 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.405827202 podStartE2EDuration="6.405827202s" podCreationTimestamp="2026-01-03 06:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:45.404426248 +0000 UTC m=+1463.731002830" watchObservedRunningTime="2026-01-03 06:04:45.405827202 +0000 UTC m=+1463.732403774" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.411277 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84c5455478-hczhs" event={"ID":"46254f53-deed-4254-801c-1db0db3ec56a","Type":"ContainerStarted","Data":"d0c4cdffe46251f9d88830d7cb175feb376e38c04043f8674c983072aaa61a72"} Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.419860 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" event={"ID":"8f24e646-0b5a-449a-bce4-35b97195975d","Type":"ContainerStarted","Data":"40801518f8038df158d1151302baad208024079138e4ff2e026e413a617824a2"} Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.420052 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" podUID="8f24e646-0b5a-449a-bce4-35b97195975d" containerName="dnsmasq-dns" containerID="cri-o://40801518f8038df158d1151302baad208024079138e4ff2e026e413a617824a2" gracePeriod=10 Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.420231 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.427825 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e","Type":"ContainerStarted","Data":"5a498574fca9ecfcf9acbf2e482a9163e6e6847249c5e1d5ef17e10f37bebab5"} Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.428026 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerName="glance-log" containerID="cri-o://d6b4b3d5aa97256c15631c081f1af1df3520b4ff236e92cc6ca50050a1790fd5" gracePeriod=30 Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.428233 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerName="glance-httpd" containerID="cri-o://5a498574fca9ecfcf9acbf2e482a9163e6e6847249c5e1d5ef17e10f37bebab5" gracePeriod=30 Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.471067 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" podStartSLOduration=6.471047246 podStartE2EDuration="6.471047246s" podCreationTimestamp="2026-01-03 06:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:45.466467764 +0000 UTC m=+1463.793044356" watchObservedRunningTime="2026-01-03 06:04:45.471047246 +0000 UTC m=+1463.797623818" Jan 03 06:04:45 crc 
kubenswrapper[4854]: I0103 06:04:45.504489 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.504468207 podStartE2EDuration="7.504468207s" podCreationTimestamp="2026-01-03 06:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:45.498964734 +0000 UTC m=+1463.825541306" watchObservedRunningTime="2026-01-03 06:04:45.504468207 +0000 UTC m=+1463.831044779" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.844745 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fd77dbb5-mpxrq"] Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.847556 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.849476 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.852767 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.871718 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fd77dbb5-mpxrq"] Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.950833 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-internal-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.950911 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-ovndb-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.950949 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lr4r\" (UniqueName: \"kubernetes.io/projected/7388a140-a67f-4a1e-a5fc-b34be00858e2-kube-api-access-4lr4r\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.951035 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-httpd-config\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.951202 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-public-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.951390 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-combined-ca-bundle\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:45 crc kubenswrapper[4854]: I0103 06:04:45.951432 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-config\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.053021 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-httpd-config\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.053074 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-public-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.053164 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-combined-ca-bundle\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.053666 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-config\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.054024 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-internal-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.054058 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-ovndb-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.054096 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lr4r\" (UniqueName: \"kubernetes.io/projected/7388a140-a67f-4a1e-a5fc-b34be00858e2-kube-api-access-4lr4r\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.058033 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-httpd-config\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: 
\"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.058280 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-internal-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.058476 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-ovndb-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.060900 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-public-tls-certs\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.061710 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-config\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.062228 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7388a140-a67f-4a1e-a5fc-b34be00858e2-combined-ca-bundle\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.074898 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lr4r\" (UniqueName: \"kubernetes.io/projected/7388a140-a67f-4a1e-a5fc-b34be00858e2-kube-api-access-4lr4r\") pod \"neutron-5fd77dbb5-mpxrq\" (UID: \"7388a140-a67f-4a1e-a5fc-b34be00858e2\") " pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.168678 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.452307 4854 generic.go:334] "Generic (PLEG): container finished" podID="35c1899a-114d-482d-9798-89c7c65fb40b" containerID="ebd1f3e6f326c8a461319b5669575dd8e7f36d72687adbf4d825c2e1129c91a2" exitCode=143 Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.452740 4854 generic.go:334] "Generic (PLEG): container finished" podID="35c1899a-114d-482d-9798-89c7c65fb40b" containerID="d43c09df31926f65b58a6a10b6696fd843aa799bfa91536ea6b7f4d8695330a3" exitCode=143 Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.452702 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35c1899a-114d-482d-9798-89c7c65fb40b","Type":"ContainerDied","Data":"ebd1f3e6f326c8a461319b5669575dd8e7f36d72687adbf4d825c2e1129c91a2"} Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.452805 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35c1899a-114d-482d-9798-89c7c65fb40b","Type":"ContainerDied","Data":"d43c09df31926f65b58a6a10b6696fd843aa799bfa91536ea6b7f4d8695330a3"} Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.455496 4854 generic.go:334] "Generic (PLEG): container finished" podID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerID="2918588f419fa62b8b58b68f06748dad12cf158d58d11e2725a85e9e2319dcb5" exitCode=0 Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.455543 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-f55rh" event={"ID":"0cac862a-2a43-44ed-903a-8d7b09100ac3","Type":"ContainerDied","Data":"2918588f419fa62b8b58b68f06748dad12cf158d58d11e2725a85e9e2319dcb5"} Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.459615 4854 generic.go:334] "Generic (PLEG): container finished" podID="8f24e646-0b5a-449a-bce4-35b97195975d" containerID="40801518f8038df158d1151302baad208024079138e4ff2e026e413a617824a2" exitCode=0 Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.459676 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" event={"ID":"8f24e646-0b5a-449a-bce4-35b97195975d","Type":"ContainerDied","Data":"40801518f8038df158d1151302baad208024079138e4ff2e026e413a617824a2"} Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.462378 4854 generic.go:334] "Generic (PLEG): container finished" podID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerID="5a498574fca9ecfcf9acbf2e482a9163e6e6847249c5e1d5ef17e10f37bebab5" exitCode=0 Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.462407 4854 generic.go:334] "Generic (PLEG): container finished" podID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerID="d6b4b3d5aa97256c15631c081f1af1df3520b4ff236e92cc6ca50050a1790fd5" exitCode=143 Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.462523 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e","Type":"ContainerDied","Data":"5a498574fca9ecfcf9acbf2e482a9163e6e6847249c5e1d5ef17e10f37bebab5"} Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.462549 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e","Type":"ContainerDied","Data":"d6b4b3d5aa97256c15631c081f1af1df3520b4ff236e92cc6ca50050a1790fd5"} Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.802016 4854 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.880987 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-dns-svc\") pod \"8f24e646-0b5a-449a-bce4-35b97195975d\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.881898 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-config\") pod \"8f24e646-0b5a-449a-bce4-35b97195975d\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.881977 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-nb\") pod \"8f24e646-0b5a-449a-bce4-35b97195975d\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.882138 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-sb\") pod \"8f24e646-0b5a-449a-bce4-35b97195975d\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.882246 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dj6t\" (UniqueName: \"kubernetes.io/projected/8f24e646-0b5a-449a-bce4-35b97195975d-kube-api-access-7dj6t\") pod \"8f24e646-0b5a-449a-bce4-35b97195975d\" (UID: \"8f24e646-0b5a-449a-bce4-35b97195975d\") " Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.891301 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f24e646-0b5a-449a-bce4-35b97195975d-kube-api-access-7dj6t" (OuterVolumeSpecName: "kube-api-access-7dj6t") pod "8f24e646-0b5a-449a-bce4-35b97195975d" (UID: "8f24e646-0b5a-449a-bce4-35b97195975d"). InnerVolumeSpecName "kube-api-access-7dj6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.986775 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dj6t\" (UniqueName: \"kubernetes.io/projected/8f24e646-0b5a-449a-bce4-35b97195975d-kube-api-access-7dj6t\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:46 crc kubenswrapper[4854]: I0103 06:04:46.995031 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.091589 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-httpd-run\") pod \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.091685 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-config-data\") pod \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.091795 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.091823 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-logs\") pod \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.091932 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p72zz\" (UniqueName: \"kubernetes.io/projected/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-kube-api-access-p72zz\") pod \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.092028 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-scripts\") pod \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.092090 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-combined-ca-bundle\") pod \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\" (UID: \"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.093449 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" (UID: "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.094370 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-logs" (OuterVolumeSpecName: "logs") pod "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" (UID: "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.161358 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-scripts" (OuterVolumeSpecName: "scripts") pod "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" (UID: "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.163238 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-kube-api-access-p72zz" (OuterVolumeSpecName: "kube-api-access-p72zz") pod "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" (UID: "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e"). InnerVolumeSpecName "kube-api-access-p72zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.179919 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f24e646-0b5a-449a-bce4-35b97195975d" (UID: "8f24e646-0b5a-449a-bce4-35b97195975d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.231648 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.231690 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.231703 4854 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.231727 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.231738 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p72zz\" (UniqueName: \"kubernetes.io/projected/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-kube-api-access-p72zz\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.312606 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e" (OuterVolumeSpecName: "glance") pod "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" (UID: "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e"). InnerVolumeSpecName "pvc-eb48dc05-44bc-494a-9a3b-52570d27764e". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.331580 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8f24e646-0b5a-449a-bce4-35b97195975d" (UID: "8f24e646-0b5a-449a-bce4-35b97195975d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.346602 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.346668 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") on node \"crc\" " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.366037 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-config" (OuterVolumeSpecName: "config") pod "8f24e646-0b5a-449a-bce4-35b97195975d" (UID: "8f24e646-0b5a-449a-bce4-35b97195975d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.395244 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-config-data" (OuterVolumeSpecName: "config-data") pod "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" (UID: "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.444915 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" (UID: "133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.445848 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8f24e646-0b5a-449a-bce4-35b97195975d" (UID: "8f24e646-0b5a-449a-bce4-35b97195975d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.449406 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.449448 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.449461 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.449474 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f24e646-0b5a-449a-bce4-35b97195975d-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.480391 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.480571 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-eb48dc05-44bc-494a-9a3b-52570d27764e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e") on node "crc" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.529334 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fd77dbb5-mpxrq"] Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.545100 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"d2586c349b4727350667419f732e90ffeb4606fa1d76f7f2d4c0a418d827f5b1"} Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.557468 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ff9wl" event={"ID":"dc184ac1-7e14-435e-898d-93e19dab6615","Type":"ContainerDied","Data":"6babe08ecd37600028d31a914f298550bace0f9de4371b98024b79f67462276e"} Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.557511 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6babe08ecd37600028d31a914f298550bace0f9de4371b98024b79f67462276e" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.566395 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.575403 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-f55rh" event={"ID":"0cac862a-2a43-44ed-903a-8d7b09100ac3","Type":"ContainerStarted","Data":"dfac7049f9c2a5f4cf7fc6dc6eb91048dc1eecae9557ce503df15c179f15025d"} Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.576461 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.603881 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-fb745b69-f55rh" podStartSLOduration=4.603862106 podStartE2EDuration="4.603862106s" podCreationTimestamp="2026-01-03 06:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:47.602684537 +0000 UTC m=+1465.929261129" watchObservedRunningTime="2026-01-03 06:04:47.603862106 +0000 UTC m=+1465.930438668" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.611617 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerStarted","Data":"ef373afcf83f8495b8f8ff89622224a235b10af839d685bcf814caa4d3119a97"} Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.621332 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84c5455478-hczhs" event={"ID":"46254f53-deed-4254-801c-1db0db3ec56a","Type":"ContainerStarted","Data":"d3785ee455877013b890619cead6235fe7f8ab8a467a1246625f7607bc7132ce"} Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.627363 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" event={"ID":"8f24e646-0b5a-449a-bce4-35b97195975d","Type":"ContainerDied","Data":"448b4127b10bf6574a5d229c209b781beeb247d1bcc4cfd19ee1e31ba9e311cb"} Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.627425 4854 scope.go:117] "RemoveContainer" containerID="40801518f8038df158d1151302baad208024079138e4ff2e026e413a617824a2" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.627553 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-nfn7r" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.647637 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e","Type":"ContainerDied","Data":"77ec03512c68c4fe7d8dd2ab65edb9c82cf60d7da6026940b820bb14744bbe1f"} Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.647788 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.649712 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ff9wl" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.659048 4854 generic.go:334] "Generic (PLEG): container finished" podID="c8beded3-7a32-47a0-a12a-e346422e7323" containerID="7ad3dc12935ce23f1c5caad7b24a62c1c73514796ef5fdcd0e70dae3c56ee113" exitCode=0 Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.659196 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-86vzw" event={"ID":"c8beded3-7a32-47a0-a12a-e346422e7323","Type":"ContainerDied","Data":"7ad3dc12935ce23f1c5caad7b24a62c1c73514796ef5fdcd0e70dae3c56ee113"} Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.663046 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.714858 4854 scope.go:117] "RemoveContainer" containerID="5b29b8cecbe3078cd76879a06e354aab32d3dc82be8d94482b28e17eef937b34" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776375 4854 scope.go:117] "RemoveContainer" containerID="5a498574fca9ecfcf9acbf2e482a9163e6e6847249c5e1d5ef17e10f37bebab5" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776534 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc184ac1-7e14-435e-898d-93e19dab6615-logs\") pod \"dc184ac1-7e14-435e-898d-93e19dab6615\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776659 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"35c1899a-114d-482d-9798-89c7c65fb40b\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776701 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs7wq\" (UniqueName: \"kubernetes.io/projected/dc184ac1-7e14-435e-898d-93e19dab6615-kube-api-access-fs7wq\") pod \"dc184ac1-7e14-435e-898d-93e19dab6615\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776776 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx874\" (UniqueName: \"kubernetes.io/projected/35c1899a-114d-482d-9798-89c7c65fb40b-kube-api-access-mx874\") pod \"35c1899a-114d-482d-9798-89c7c65fb40b\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776809 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-scripts\") pod \"35c1899a-114d-482d-9798-89c7c65fb40b\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776836 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-config-data\") pod \"dc184ac1-7e14-435e-898d-93e19dab6615\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776861 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-httpd-run\") pod \"35c1899a-114d-482d-9798-89c7c65fb40b\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.776982 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-scripts\") pod \"dc184ac1-7e14-435e-898d-93e19dab6615\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.777044 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-combined-ca-bundle\") pod \"35c1899a-114d-482d-9798-89c7c65fb40b\" (UID: 
\"35c1899a-114d-482d-9798-89c7c65fb40b\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.777110 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-logs\") pod \"35c1899a-114d-482d-9798-89c7c65fb40b\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.777178 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-config-data\") pod \"35c1899a-114d-482d-9798-89c7c65fb40b\" (UID: \"35c1899a-114d-482d-9798-89c7c65fb40b\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.777231 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-combined-ca-bundle\") pod \"dc184ac1-7e14-435e-898d-93e19dab6615\" (UID: \"dc184ac1-7e14-435e-898d-93e19dab6615\") " Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.778096 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc184ac1-7e14-435e-898d-93e19dab6615-logs" (OuterVolumeSpecName: "logs") pod "dc184ac1-7e14-435e-898d-93e19dab6615" (UID: "dc184ac1-7e14-435e-898d-93e19dab6615"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.778475 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35c1899a-114d-482d-9798-89c7c65fb40b" (UID: "35c1899a-114d-482d-9798-89c7c65fb40b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.781685 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-logs" (OuterVolumeSpecName: "logs") pod "35c1899a-114d-482d-9798-89c7c65fb40b" (UID: "35c1899a-114d-482d-9798-89c7c65fb40b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.790881 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc184ac1-7e14-435e-898d-93e19dab6615-kube-api-access-fs7wq" (OuterVolumeSpecName: "kube-api-access-fs7wq") pod "dc184ac1-7e14-435e-898d-93e19dab6615" (UID: "dc184ac1-7e14-435e-898d-93e19dab6615"). InnerVolumeSpecName "kube-api-access-fs7wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.793070 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-nfn7r"] Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.808132 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-nfn7r"] Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.824749 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.825423 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-scripts" (OuterVolumeSpecName: "scripts") pod "35c1899a-114d-482d-9798-89c7c65fb40b" (UID: "35c1899a-114d-482d-9798-89c7c65fb40b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.831822 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-scripts" (OuterVolumeSpecName: "scripts") pod "dc184ac1-7e14-435e-898d-93e19dab6615" (UID: "dc184ac1-7e14-435e-898d-93e19dab6615"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.835446 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c1899a-114d-482d-9798-89c7c65fb40b-kube-api-access-mx874" (OuterVolumeSpecName: "kube-api-access-mx874") pod "35c1899a-114d-482d-9798-89c7c65fb40b" (UID: "35c1899a-114d-482d-9798-89c7c65fb40b"). InnerVolumeSpecName "kube-api-access-mx874". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.839025 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.852543 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:04:47 crc kubenswrapper[4854]: E0103 06:04:47.853650 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" containerName="glance-httpd" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.853671 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" containerName="glance-httpd" Jan 03 06:04:47 crc kubenswrapper[4854]: E0103 06:04:47.853686 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" containerName="glance-log" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.853692 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" containerName="glance-log" Jan 03 06:04:47 crc kubenswrapper[4854]: E0103 06:04:47.853719 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerName="glance-httpd" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.853725 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerName="glance-httpd" Jan 03 06:04:47 crc kubenswrapper[4854]: E0103 06:04:47.853733 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f24e646-0b5a-449a-bce4-35b97195975d" containerName="init" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.853738 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f24e646-0b5a-449a-bce4-35b97195975d" containerName="init" Jan 03 06:04:47 crc kubenswrapper[4854]: E0103 06:04:47.853750 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerName="glance-log" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.853756 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerName="glance-log" Jan 03 06:04:47 crc kubenswrapper[4854]: E0103 06:04:47.853784 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f24e646-0b5a-449a-bce4-35b97195975d" containerName="dnsmasq-dns" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.853789 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f24e646-0b5a-449a-bce4-35b97195975d" containerName="dnsmasq-dns" Jan 03 06:04:47 crc kubenswrapper[4854]: E0103 06:04:47.853801 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc184ac1-7e14-435e-898d-93e19dab6615" containerName="placement-db-sync" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.853807 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc184ac1-7e14-435e-898d-93e19dab6615" containerName="placement-db-sync" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.854014 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f24e646-0b5a-449a-bce4-35b97195975d" containerName="dnsmasq-dns" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.854804 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc184ac1-7e14-435e-898d-93e19dab6615" containerName="placement-db-sync" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 
06:04:47.854822 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerName="glance-httpd" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.854842 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" containerName="glance-httpd" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.854855 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" containerName="glance-log" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.854873 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" containerName="glance-log" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.856712 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.862185 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.862234 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.862471 4854 scope.go:117] "RemoveContainer" containerID="d6b4b3d5aa97256c15631c081f1af1df3520b4ff236e92cc6ca50050a1790fd5" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.878458 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.880258 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx874\" (UniqueName: \"kubernetes.io/projected/35c1899a-114d-482d-9798-89c7c65fb40b-kube-api-access-mx874\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.881108 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.881122 4854 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.881138 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.881150 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c1899a-114d-482d-9798-89c7c65fb40b-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.881159 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc184ac1-7e14-435e-898d-93e19dab6615-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.881169 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs7wq\" (UniqueName: \"kubernetes.io/projected/dc184ac1-7e14-435e-898d-93e19dab6615-kube-api-access-fs7wq\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.985051 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.985223 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.985381 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-logs\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.985459 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.985589 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.985677 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.985779 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zc7d\" (UniqueName: \"kubernetes.io/projected/7c6807cf-78d2-4314-be86-3193a4f978a7-kube-api-access-7zc7d\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:47 crc kubenswrapper[4854]: I0103 06:04:47.985858 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.088648 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " 
pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.088698 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.088757 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zc7d\" (UniqueName: \"kubernetes.io/projected/7c6807cf-78d2-4314-be86-3193a4f978a7-kube-api-access-7zc7d\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.088785 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.088829 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.088871 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.088947 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-logs\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.088974 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.101612 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.117398 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-logs\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc 
kubenswrapper[4854]: I0103 06:04:48.126943 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.126989 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c9f88882bd3572929d1777fc402cfbfc71f661649bff4d337785efef0a76426b/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.160801 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e" path="/var/lib/kubelet/pods/133a9ddd-4191-4b4e-afa0-b3cd2fb9b22e/volumes" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.161746 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f24e646-0b5a-449a-bce4-35b97195975d" path="/var/lib/kubelet/pods/8f24e646-0b5a-449a-bce4-35b97195975d/volumes" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.254502 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.254786 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.254811 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.257138 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.257887 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zc7d\" (UniqueName: \"kubernetes.io/projected/7c6807cf-78d2-4314-be86-3193a4f978a7-kube-api-access-7zc7d\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.287263 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e" (OuterVolumeSpecName: "glance") pod "35c1899a-114d-482d-9798-89c7c65fb40b" (UID: "35c1899a-114d-482d-9798-89c7c65fb40b"). 
InnerVolumeSpecName "pvc-51516975-721e-4e12-b0dd-7f07c321db4e". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.295219 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") on node \"crc\" " Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.303352 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-config-data" (OuterVolumeSpecName: "config-data") pod "dc184ac1-7e14-435e-898d-93e19dab6615" (UID: "dc184ac1-7e14-435e-898d-93e19dab6615"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.332725 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35c1899a-114d-482d-9798-89c7c65fb40b" (UID: "35c1899a-114d-482d-9798-89c7c65fb40b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.345346 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc184ac1-7e14-435e-898d-93e19dab6615" (UID: "dc184ac1-7e14-435e-898d-93e19dab6615"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.382524 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " pod="openstack/glance-default-external-api-0" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.384489 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-config-data" (OuterVolumeSpecName: "config-data") pod "35c1899a-114d-482d-9798-89c7c65fb40b" (UID: "35c1899a-114d-482d-9798-89c7c65fb40b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.387338 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.387481 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-51516975-721e-4e12-b0dd-7f07c321db4e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e") on node "crc"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.400346 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.400379 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c1899a-114d-482d-9798-89c7c65fb40b-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.400388 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.400398 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") on node \"crc\" DevicePath \"\""
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.400410 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc184ac1-7e14-435e-898d-93e19dab6615-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.496220 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.758730 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35c1899a-114d-482d-9798-89c7c65fb40b","Type":"ContainerDied","Data":"447e847c097ce43bb900b656f3b7a64216b8835de5da70241969b41c80e78edf"}
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.759257 4854 scope.go:117] "RemoveContainer" containerID="ebd1f3e6f326c8a461319b5669575dd8e7f36d72687adbf4d825c2e1129c91a2"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.758792 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.772905 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd77dbb5-mpxrq" event={"ID":"7388a140-a67f-4a1e-a5fc-b34be00858e2","Type":"ContainerStarted","Data":"8f1c8d9707d29e9ccae840b07ce8c96da3781339d3e4729b8a6cf1b08fe8e117"}
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.772948 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd77dbb5-mpxrq" event={"ID":"7388a140-a67f-4a1e-a5fc-b34be00858e2","Type":"ContainerStarted","Data":"b4b6b2f7da5040caeb2f207b91ff66a35b28564ead1acc94d7dee15b32f8e3b3"}
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.789712 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84c5455478-hczhs" event={"ID":"46254f53-deed-4254-801c-1db0db3ec56a","Type":"ContainerStarted","Data":"6c49479d5e0adf15ac1d291b5867a350c34eaac96d004e0a6d75614322aac40e"}
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.789939 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84c5455478-hczhs"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.830147 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.848551 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ff9wl"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.850427 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"3b22d075c3a7ee76e377ae295cc106696ba9c13bc0e7e8a680ca6db13d07ce99"}
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.855594 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.883139 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7479b69c6-2zfrf"]
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.885095 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.886363 4854 scope.go:117] "RemoveContainer" containerID="d43c09df31926f65b58a6a10b6696fd843aa799bfa91536ea6b7f4d8695330a3"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.896971 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.897189 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2ljdj"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.897327 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.897474 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.897654 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.899780 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84c5455478-hczhs" podStartSLOduration=5.899758429 podStartE2EDuration="5.899758429s" podCreationTimestamp="2026-01-03 06:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:48.842335454 +0000 UTC m=+1467.168912026" watchObservedRunningTime="2026-01-03 06:04:48.899758429 +0000 UTC m=+1467.226335021"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.900108 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.901838 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.906947 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.913466 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.934715 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6955c3e-6975-4ae7-b1e4-190e75cf0321-logs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.934807 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqf8j\" (UniqueName: \"kubernetes.io/projected/b6955c3e-6975-4ae7-b1e4-190e75cf0321-kube-api-access-nqf8j\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.934837 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-internal-tls-certs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.934907 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-combined-ca-bundle\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.934945 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-config-data\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.934970 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-scripts\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.935003 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-public-tls-certs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.956599 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7479b69c6-2zfrf"]
Jan 03 06:04:48 crc kubenswrapper[4854]: I0103 06:04:48.995144 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.038875 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-combined-ca-bundle\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039016 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-config-data\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039055 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039100 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-scripts\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039146 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-public-tls-certs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039211 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039276 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6955c3e-6975-4ae7-b1e4-190e75cf0321-logs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039302 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-scripts\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039339 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td89g\" (UniqueName: \"kubernetes.io/projected/592030b4-bfc1-4eb9-81a3-20a22a405f70-kube-api-access-td89g\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039380 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-logs\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039420 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039466 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqf8j\" (UniqueName: \"kubernetes.io/projected/b6955c3e-6975-4ae7-b1e4-190e75cf0321-kube-api-access-nqf8j\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039509 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039550 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-internal-tls-certs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.039586 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-config-data\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.040573 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6955c3e-6975-4ae7-b1e4-190e75cf0321-logs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.049755 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-scripts\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.050301 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-combined-ca-bundle\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.051596 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-internal-tls-certs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.054700 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-public-tls-certs\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.056927 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6955c3e-6975-4ae7-b1e4-190e75cf0321-config-data\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.060895 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqf8j\" (UniqueName: \"kubernetes.io/projected/b6955c3e-6975-4ae7-b1e4-190e75cf0321-kube-api-access-nqf8j\") pod \"placement-7479b69c6-2zfrf\" (UID: \"b6955c3e-6975-4ae7-b1e4-190e75cf0321\") " pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.150459 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.150827 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.150874 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-scripts\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.150908 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td89g\" (UniqueName: \"kubernetes.io/projected/592030b4-bfc1-4eb9-81a3-20a22a405f70-kube-api-access-td89g\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.150946 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-logs\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.150976 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.151019 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.151047 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-config-data\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.151621 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-logs\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.151842 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.157811 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-config-data\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.157990 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.171752 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.174357 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td89g\" (UniqueName: \"kubernetes.io/projected/592030b4-bfc1-4eb9-81a3-20a22a405f70-kube-api-access-td89g\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.176567 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-scripts\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.181043 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.181113 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b13373a9549fe2a930d42f4a05d9c8d5f6308eb7dd62d88a464a85534d03fb8/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.249405 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.255072 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.288852 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.335963 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.662102 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-86vzw"
Need to start a new one" pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.778271 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-combined-ca-bundle\") pod \"c8beded3-7a32-47a0-a12a-e346422e7323\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.778441 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-fernet-keys\") pod \"c8beded3-7a32-47a0-a12a-e346422e7323\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.778470 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-config-data\") pod \"c8beded3-7a32-47a0-a12a-e346422e7323\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.778509 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzfhq\" (UniqueName: \"kubernetes.io/projected/c8beded3-7a32-47a0-a12a-e346422e7323-kube-api-access-fzfhq\") pod \"c8beded3-7a32-47a0-a12a-e346422e7323\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.778751 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-credential-keys\") pod \"c8beded3-7a32-47a0-a12a-e346422e7323\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.778824 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-scripts\") pod \"c8beded3-7a32-47a0-a12a-e346422e7323\" (UID: \"c8beded3-7a32-47a0-a12a-e346422e7323\") " Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.787345 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c8beded3-7a32-47a0-a12a-e346422e7323" (UID: "c8beded3-7a32-47a0-a12a-e346422e7323"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.794423 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-scripts" (OuterVolumeSpecName: "scripts") pod "c8beded3-7a32-47a0-a12a-e346422e7323" (UID: "c8beded3-7a32-47a0-a12a-e346422e7323"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.797850 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c8beded3-7a32-47a0-a12a-e346422e7323" (UID: "c8beded3-7a32-47a0-a12a-e346422e7323"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.805527 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8beded3-7a32-47a0-a12a-e346422e7323-kube-api-access-fzfhq" (OuterVolumeSpecName: "kube-api-access-fzfhq") pod "c8beded3-7a32-47a0-a12a-e346422e7323" (UID: "c8beded3-7a32-47a0-a12a-e346422e7323"). InnerVolumeSpecName "kube-api-access-fzfhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.862555 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-57b895997d-9fnc7"] Jan 03 06:04:49 crc kubenswrapper[4854]: E0103 06:04:49.863066 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8beded3-7a32-47a0-a12a-e346422e7323" containerName="keystone-bootstrap" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.863095 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8beded3-7a32-47a0-a12a-e346422e7323" containerName="keystone-bootstrap" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.863298 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8beded3-7a32-47a0-a12a-e346422e7323" containerName="keystone-bootstrap" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.864040 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.864218 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8beded3-7a32-47a0-a12a-e346422e7323" (UID: "c8beded3-7a32-47a0-a12a-e346422e7323"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.864313 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-config-data" (OuterVolumeSpecName: "config-data") pod "c8beded3-7a32-47a0-a12a-e346422e7323" (UID: "c8beded3-7a32-47a0-a12a-e346422e7323"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.866428 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.871113 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.883999 4854 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.884036 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.884048 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.884061 4854 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.884071 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8beded3-7a32-47a0-a12a-e346422e7323-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.884096 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzfhq\" (UniqueName: \"kubernetes.io/projected/c8beded3-7a32-47a0-a12a-e346422e7323-kube-api-access-fzfhq\") on node \"crc\" DevicePath \"\"" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.910154 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57b895997d-9fnc7"] Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.912115 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-86vzw" event={"ID":"c8beded3-7a32-47a0-a12a-e346422e7323","Type":"ContainerDied","Data":"2daf96baec0fc83b527d6aca89150e852e8474e09a4d5fb5af2f37be042b954d"} Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.912155 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2daf96baec0fc83b527d6aca89150e852e8474e09a4d5fb5af2f37be042b954d" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.912411 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-86vzw" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.949221 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd77dbb5-mpxrq" event={"ID":"7388a140-a67f-4a1e-a5fc-b34be00858e2","Type":"ContainerStarted","Data":"1208dbdae15070b3f4f5e5e9640f592bc42ea202a35513b1a0feed5349549465"} Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.950291 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.961443 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c6807cf-78d2-4314-be86-3193a4f978a7","Type":"ContainerStarted","Data":"b5f3cfa503a68335cece6183716584750500821567821c9d4616ad7daf3dbc80"} Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.988459 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"f02f10c77388466a0a93ded1359b36ace28d56aa97a63bc6c90b9e1dd0d836f8"} Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.988524 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"2b9a9453fef547b82deea80fb072e7228a820876ca3016dcdce11a97808c5445"} Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.988929 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-fernet-keys\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.988970 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-config-data\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.988985 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-credential-keys\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.989072 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-scripts\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.989115 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-combined-ca-bundle\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.989161 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-internal-tls-certs\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.989213 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-public-tls-certs\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:49 crc kubenswrapper[4854]: I0103 06:04:49.989232 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w6zx\" (UniqueName: \"kubernetes.io/projected/a9d71fd6-d5b7-4a82-8f08-96508acb8807-kube-api-access-2w6zx\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.000681 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fd77dbb5-mpxrq" podStartSLOduration=5.000656176 podStartE2EDuration="5.000656176s" podCreationTimestamp="2026-01-03 06:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:49.974318407 +0000 UTC m=+1468.300894989" watchObservedRunningTime="2026-01-03 06:04:50.000656176 +0000 UTC m=+1468.327232748" Jan 03 06:04:50 crc kubenswrapper[4854]: W0103 06:04:50.012980 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6955c3e_6975_4ae7_b1e4_190e75cf0321.slice/crio-7e66b23fe52c6ac74aa9e6ef4504852e3438e817b59836e7e945cb527ffd4348 WatchSource:0}: Error finding container 7e66b23fe52c6ac74aa9e6ef4504852e3438e817b59836e7e945cb527ffd4348: Status 404 returned error can't find the container with id 7e66b23fe52c6ac74aa9e6ef4504852e3438e817b59836e7e945cb527ffd4348 Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.024971 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7479b69c6-2zfrf"] Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.091513 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-combined-ca-bundle\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.091607 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-internal-tls-certs\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.091683 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-public-tls-certs\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " 
pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.091706 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w6zx\" (UniqueName: \"kubernetes.io/projected/a9d71fd6-d5b7-4a82-8f08-96508acb8807-kube-api-access-2w6zx\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.091790 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-fernet-keys\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.091820 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-config-data\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.091836 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-credential-keys\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.091920 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-scripts\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.097218 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-combined-ca-bundle\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.097294 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-scripts\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.100596 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-credential-keys\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.101784 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-fernet-keys\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.102203 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-config-data\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.103964 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-public-tls-certs\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.108894 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d71fd6-d5b7-4a82-8f08-96508acb8807-internal-tls-certs\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.118169 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w6zx\" (UniqueName: \"kubernetes.io/projected/a9d71fd6-d5b7-4a82-8f08-96508acb8807-kube-api-access-2w6zx\") pod \"keystone-57b895997d-9fnc7\" (UID: \"a9d71fd6-d5b7-4a82-8f08-96508acb8807\") " pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.156152 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c1899a-114d-482d-9798-89c7c65fb40b" path="/var/lib/kubelet/pods/35c1899a-114d-482d-9798-89c7c65fb40b/volumes" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.208305 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.306742 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 06:04:50 crc kubenswrapper[4854]: W0103 06:04:50.336255 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod592030b4_bfc1_4eb9_81a3_20a22a405f70.slice/crio-c70e8d7c9620c6e4df6b0b13fc6fc26c413b8d9c8584de0c842f3668e9319747 WatchSource:0}: Error finding container c70e8d7c9620c6e4df6b0b13fc6fc26c413b8d9c8584de0c842f3668e9319747: Status 404 returned error can't find the container with id c70e8d7c9620c6e4df6b0b13fc6fc26c413b8d9c8584de0c842f3668e9319747 Jan 03 06:04:50 crc kubenswrapper[4854]: I0103 06:04:50.954753 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57b895997d-9fnc7"] Jan 03 06:04:51 crc kubenswrapper[4854]: W0103 06:04:51.046191 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d71fd6_d5b7_4a82_8f08_96508acb8807.slice/crio-babd18b01df417ffb9079330d4992f628ac6a07bd4a134117cf68753099b4cb8 WatchSource:0}: Error finding container babd18b01df417ffb9079330d4992f628ac6a07bd4a134117cf68753099b4cb8: Status 404 returned error can't find the container with id babd18b01df417ffb9079330d4992f628ac6a07bd4a134117cf68753099b4cb8 Jan 03 06:04:51 crc kubenswrapper[4854]: I0103 06:04:51.112963 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7479b69c6-2zfrf" event={"ID":"b6955c3e-6975-4ae7-b1e4-190e75cf0321","Type":"ContainerStarted","Data":"7e66b23fe52c6ac74aa9e6ef4504852e3438e817b59836e7e945cb527ffd4348"} Jan 03 06:04:51 crc 
kubenswrapper[4854]: I0103 06:04:51.134794 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"592030b4-bfc1-4eb9-81a3-20a22a405f70","Type":"ContainerStarted","Data":"c70e8d7c9620c6e4df6b0b13fc6fc26c413b8d9c8584de0c842f3668e9319747"} Jan 03 06:04:52 crc kubenswrapper[4854]: I0103 06:04:52.154866 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57b895997d-9fnc7" event={"ID":"a9d71fd6-d5b7-4a82-8f08-96508acb8807","Type":"ContainerStarted","Data":"babd18b01df417ffb9079330d4992f628ac6a07bd4a134117cf68753099b4cb8"} Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.169267 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c6807cf-78d2-4314-be86-3193a4f978a7","Type":"ContainerStarted","Data":"4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415"} Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.171375 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7479b69c6-2zfrf" event={"ID":"b6955c3e-6975-4ae7-b1e4-190e75cf0321","Type":"ContainerStarted","Data":"b9fb3e9bf35b3d352de863025f1bc2a773e055e64bb9a8227cacc2bc0ee77b1d"} Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.172826 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"592030b4-bfc1-4eb9-81a3-20a22a405f70","Type":"ContainerStarted","Data":"85643b8d8711e7fcd4a7c9cf76762eace64d762f122af93501aa913d2f5fb88d"} Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.174209 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57b895997d-9fnc7" event={"ID":"a9d71fd6-d5b7-4a82-8f08-96508acb8807","Type":"ContainerStarted","Data":"255abf333df9a05ff86efc314ed967f92a21179e949d528556b2da459ca2c4fa"} Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.175884 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.213736 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-57b895997d-9fnc7" podStartSLOduration=4.213683971 podStartE2EDuration="4.213683971s" podCreationTimestamp="2026-01-03 06:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:04:53.203663608 +0000 UTC m=+1471.530240180" watchObservedRunningTime="2026-01-03 06:04:53.213683971 +0000 UTC m=+1471.540260563" Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.808966 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.915573 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-nrqr2"] Jan 03 06:04:53 crc kubenswrapper[4854]: I0103 06:04:53.916103 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerName="dnsmasq-dns" containerID="cri-o://e4d2c73025d878d75de50e1c68fe008f5cb923378ffa7fd84739cd20ac45b8e6" gracePeriod=10 Jan 03 06:04:54 crc kubenswrapper[4854]: I0103 06:04:54.191324 4854 generic.go:334] "Generic (PLEG): container finished" podID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerID="e4d2c73025d878d75de50e1c68fe008f5cb923378ffa7fd84739cd20ac45b8e6" 
Jan 03 06:04:54 crc kubenswrapper[4854]: I0103 06:04:54.191406 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" event={"ID":"30740ab8-3540-4d4e-a677-2deac6c1b280","Type":"ContainerDied","Data":"e4d2c73025d878d75de50e1c68fe008f5cb923378ffa7fd84739cd20ac45b8e6"}
Jan 03 06:04:55 crc kubenswrapper[4854]: I0103 06:04:55.412781 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: connect: connection refused"
Jan 03 06:05:00 crc kubenswrapper[4854]: I0103 06:05:00.977203 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2"
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.177027 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-dns-svc\") pod \"30740ab8-3540-4d4e-a677-2deac6c1b280\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") "
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.177560 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-nb\") pod \"30740ab8-3540-4d4e-a677-2deac6c1b280\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") "
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.177735 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wzrb\" (UniqueName: \"kubernetes.io/projected/30740ab8-3540-4d4e-a677-2deac6c1b280-kube-api-access-8wzrb\") pod \"30740ab8-3540-4d4e-a677-2deac6c1b280\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") "
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.177880 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-sb\") pod \"30740ab8-3540-4d4e-a677-2deac6c1b280\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") "
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.177983 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-config\") pod \"30740ab8-3540-4d4e-a677-2deac6c1b280\" (UID: \"30740ab8-3540-4d4e-a677-2deac6c1b280\") "
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.191301 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30740ab8-3540-4d4e-a677-2deac6c1b280-kube-api-access-8wzrb" (OuterVolumeSpecName: "kube-api-access-8wzrb") pod "30740ab8-3540-4d4e-a677-2deac6c1b280" (UID: "30740ab8-3540-4d4e-a677-2deac6c1b280"). InnerVolumeSpecName "kube-api-access-8wzrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.280718 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wzrb\" (UniqueName: \"kubernetes.io/projected/30740ab8-3540-4d4e-a677-2deac6c1b280-kube-api-access-8wzrb\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.343156 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" event={"ID":"30740ab8-3540-4d4e-a677-2deac6c1b280","Type":"ContainerDied","Data":"33a5dc663dd8b85bfebdb19d8004216f20379cb840acc92502fd0ee80e94e366"}
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.343209 4854 scope.go:117] "RemoveContainer" containerID="e4d2c73025d878d75de50e1c68fe008f5cb923378ffa7fd84739cd20ac45b8e6"
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.343335 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2"
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.480279 4854 scope.go:117] "RemoveContainer" containerID="756321ee3613bf6dc8c53edf1834b609201867cbaa2907214fed49e2f200da6b"
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.737936 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30740ab8-3540-4d4e-a677-2deac6c1b280" (UID: "30740ab8-3540-4d4e-a677-2deac6c1b280"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.741784 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30740ab8-3540-4d4e-a677-2deac6c1b280" (UID: "30740ab8-3540-4d4e-a677-2deac6c1b280"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.792864 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.792890 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.922303 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-config" (OuterVolumeSpecName: "config") pod "30740ab8-3540-4d4e-a677-2deac6c1b280" (UID: "30740ab8-3540-4d4e-a677-2deac6c1b280"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.922330 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30740ab8-3540-4d4e-a677-2deac6c1b280" (UID: "30740ab8-3540-4d4e-a677-2deac6c1b280"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.997384 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:01 crc kubenswrapper[4854]: I0103 06:05:01.997616 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30740ab8-3540-4d4e-a677-2deac6c1b280-config\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.161336 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-nrqr2"]
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.172757 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-nrqr2"]
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.368665 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"3bf04c3e8e72b81fa1f203745a8ecae8c2a6ab55addaf755943519bf6c074b4c"}
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.372768 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7479b69c6-2zfrf" event={"ID":"b6955c3e-6975-4ae7-b1e4-190e75cf0321","Type":"ContainerStarted","Data":"8a81db6a7d2b056cdda5aaf2c15243ed7721fb23acf2d103b21166d72cd86790"}
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.372841 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.373050 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7479b69c6-2zfrf"
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.377554 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"592030b4-bfc1-4eb9-81a3-20a22a405f70","Type":"ContainerStarted","Data":"8100ec056877151f84d04b412963cdb83d2054f08073f1c1ab4a64e7a7da9f5f"}
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.382633 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerStarted","Data":"3ab9a5fd121afa77a60db235cc7985030170fb6cee975ec755d41725cd7857eb"}
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.404747 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7479b69c6-2zfrf" podStartSLOduration=14.404729285 podStartE2EDuration="14.404729285s" podCreationTimestamp="2026-01-03 06:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:02.401742822 +0000 UTC m=+1480.728319394" watchObservedRunningTime="2026-01-03 06:05:02.404729285 +0000 UTC m=+1480.731305857"
Jan 03 06:05:02 crc kubenswrapper[4854]: I0103 06:05:02.429871 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.429853765 podStartE2EDuration="14.429853765s" podCreationTimestamp="2026-01-03 06:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:02.424776742 +0000 UTC m=+1480.751353314"
watchObservedRunningTime="2026-01-03 06:05:02.429853765 +0000 UTC m=+1480.756430337" Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.412976 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9rnh5" event={"ID":"8f46296d-5d5c-4aa8-94e1-e8e5951da088","Type":"ContainerStarted","Data":"86f9f477689f1793ce52c451d0b1e9302ba247029344f9a1a6cfa17c673ff8e7"} Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.422833 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"b4f5d04f758bc3d6662756b288e20ec0efb471e38751949aedc7315a8f76f5ef"} Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.422871 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"ef23945222f6bff19bbb76c5a92e0d40b2ec7c59c98f568aaf107a2a4f3eaf6b"} Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.425630 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lk7dp" event={"ID":"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4","Type":"ContainerStarted","Data":"53c039f0963d33593ee947a8a3ea2c025e9c4672bac14f8a3efbc479981065a6"} Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.429281 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sd52b" event={"ID":"ca061deb-f600-49db-8ac3-6213e22b2f76","Type":"ContainerStarted","Data":"e3b5c91257f418ab8f271fe9fa7d08b1009bc5b328230b69226f1d7bb15dd647"} Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.434512 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c6807cf-78d2-4314-be86-3193a4f978a7","Type":"ContainerStarted","Data":"6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1"} Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.459627 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lk7dp" podStartSLOduration=5.039212155 podStartE2EDuration="1m4.459600584s" podCreationTimestamp="2026-01-03 06:03:59 +0000 UTC" firstStartedPulling="2026-01-03 06:04:01.813048758 +0000 UTC m=+1420.139625330" lastFinishedPulling="2026-01-03 06:05:01.233437187 +0000 UTC m=+1479.560013759" observedRunningTime="2026-01-03 06:05:03.455526275 +0000 UTC m=+1481.782102877" watchObservedRunningTime="2026-01-03 06:05:03.459600584 +0000 UTC m=+1481.786177166" Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.462474 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-9rnh5" podStartSLOduration=4.382675363 podStartE2EDuration="1m4.462453843s" podCreationTimestamp="2026-01-03 06:03:59 +0000 UTC" firstStartedPulling="2026-01-03 06:04:01.146121104 +0000 UTC m=+1419.472697676" lastFinishedPulling="2026-01-03 06:05:01.225899584 +0000 UTC m=+1479.552476156" observedRunningTime="2026-01-03 06:05:03.435741124 +0000 UTC m=+1481.762317696" watchObservedRunningTime="2026-01-03 06:05:03.462453843 +0000 UTC m=+1481.789030425" Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.492852 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-sd52b" podStartSLOduration=5.072112424 podStartE2EDuration="1m4.492828081s" podCreationTimestamp="2026-01-03 06:03:59 +0000 UTC" firstStartedPulling="2026-01-03 06:04:01.812780572 +0000 UTC m=+1420.139357144" 
lastFinishedPulling="2026-01-03 06:05:01.233496229 +0000 UTC m=+1479.560072801" observedRunningTime="2026-01-03 06:05:03.473396339 +0000 UTC m=+1481.799972921" watchObservedRunningTime="2026-01-03 06:05:03.492828081 +0000 UTC m=+1481.819404673" Jan 03 06:05:03 crc kubenswrapper[4854]: I0103 06:05:03.526806 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.526786786 podStartE2EDuration="16.526786786s" podCreationTimestamp="2026-01-03 06:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:03.520266747 +0000 UTC m=+1481.846843319" watchObservedRunningTime="2026-01-03 06:05:03.526786786 +0000 UTC m=+1481.853363378" Jan 03 06:05:04 crc kubenswrapper[4854]: I0103 06:05:04.142201 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" path="/var/lib/kubelet/pods/30740ab8-3540-4d4e-a677-2deac6c1b280/volumes" Jan 03 06:05:04 crc kubenswrapper[4854]: I0103 06:05:04.482593 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"952e10370c2237e6da58ba7d277fec53bb912afe903272c9497e2dd46386b2c0"} Jan 03 06:05:04 crc kubenswrapper[4854]: I0103 06:05:04.482841 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"212d912a2e5550be5cbbaf9441c58e1b32fe0332f2001c90c7e3507c46cd31ad"} Jan 03 06:05:04 crc kubenswrapper[4854]: I0103 06:05:04.482851 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"2d6ef79d6e7e6a1fff8696e57f0b63f02b9be57aca595fe3795a3dd5e4b338f6"} Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.333416 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7479b69c6-2zfrf" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.413172 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-nrqr2" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: i/o timeout" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.501272 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f6a47ad8-d256-453c-910a-1506c8f73657","Type":"ContainerStarted","Data":"8730b31c0592f01aaef296ea622088c69e1d0d40bdc23f2aba9d42c67d198e74"} Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.548627 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=72.431742639 podStartE2EDuration="2m11.54860296s" podCreationTimestamp="2026-01-03 06:02:54 +0000 UTC" firstStartedPulling="2026-01-03 06:04:02.007841545 +0000 UTC m=+1420.334418117" lastFinishedPulling="2026-01-03 06:05:01.124701866 +0000 UTC m=+1479.451278438" observedRunningTime="2026-01-03 06:05:05.540421421 +0000 UTC m=+1483.866997993" watchObservedRunningTime="2026-01-03 06:05:05.54860296 +0000 UTC m=+1483.875179532" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.850989 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lrn9n"] Jan 03 
06:05:05 crc kubenswrapper[4854]: E0103 06:05:05.853221 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerName="dnsmasq-dns" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.853254 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerName="dnsmasq-dns" Jan 03 06:05:05 crc kubenswrapper[4854]: E0103 06:05:05.853284 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerName="init" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.853292 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerName="init" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.853956 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="30740ab8-3540-4d4e-a677-2deac6c1b280" containerName="dnsmasq-dns" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.856732 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.859650 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.881280 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lrn9n"] Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.926108 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-config\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.926306 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswt2\" (UniqueName: \"kubernetes.io/projected/4c5068d8-79f7-4319-8b27-2343ad584166-kube-api-access-wswt2\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.926412 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.926502 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 06:05:05.926685 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:05 crc kubenswrapper[4854]: I0103 
06:05:05.926719 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.035805 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.035874 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.035964 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-config\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.036066 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswt2\" (UniqueName: \"kubernetes.io/projected/4c5068d8-79f7-4319-8b27-2343ad584166-kube-api-access-wswt2\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.036334 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.036424 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.036817 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.036891 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.037213 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.040593 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.041176 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-config\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.061184 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswt2\" (UniqueName: \"kubernetes.io/projected/4c5068d8-79f7-4319-8b27-2343ad584166-kube-api-access-wswt2\") pod \"dnsmasq-dns-55f844cf75-lrn9n\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.194239 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:06 crc kubenswrapper[4854]: I0103 06:05:06.698741 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lrn9n"] Jan 03 06:05:07 crc kubenswrapper[4854]: I0103 06:05:07.533093 4854 generic.go:334] "Generic (PLEG): container finished" podID="4c5068d8-79f7-4319-8b27-2343ad584166" containerID="fb0703b40598a0147541955f6542b7f3c5727badd3234e73d2e392f9c7411afe" exitCode=0 Jan 03 06:05:07 crc kubenswrapper[4854]: I0103 06:05:07.533139 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" event={"ID":"4c5068d8-79f7-4319-8b27-2343ad584166","Type":"ContainerDied","Data":"fb0703b40598a0147541955f6542b7f3c5727badd3234e73d2e392f9c7411afe"} Jan 03 06:05:07 crc kubenswrapper[4854]: I0103 06:05:07.533331 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" event={"ID":"4c5068d8-79f7-4319-8b27-2343ad584166","Type":"ContainerStarted","Data":"147edc5e14d7a380a1c17a1e92643076ecf5ae648c6af502432f92898b8efa27"} Jan 03 06:05:08 crc kubenswrapper[4854]: I0103 06:05:08.498263 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 03 06:05:08 crc kubenswrapper[4854]: I0103 06:05:08.499303 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 03 06:05:08 crc kubenswrapper[4854]: I0103 06:05:08.553642 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 03 06:05:08 crc kubenswrapper[4854]: I0103 06:05:08.554328 4854 generic.go:334] "Generic (PLEG): container finished" podID="cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" containerID="53c039f0963d33593ee947a8a3ea2c025e9c4672bac14f8a3efbc479981065a6" exitCode=0 Jan 03 06:05:08 crc kubenswrapper[4854]: I0103 06:05:08.554451 4854 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lk7dp" event={"ID":"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4","Type":"ContainerDied","Data":"53c039f0963d33593ee947a8a3ea2c025e9c4672bac14f8a3efbc479981065a6"} Jan 03 06:05:08 crc kubenswrapper[4854]: I0103 06:05:08.554681 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 03 06:05:08 crc kubenswrapper[4854]: I0103 06:05:08.559327 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 03 06:05:09 crc kubenswrapper[4854]: I0103 06:05:09.291251 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 03 06:05:09 crc kubenswrapper[4854]: I0103 06:05:09.291449 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 03 06:05:09 crc kubenswrapper[4854]: I0103 06:05:09.336500 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 03 06:05:09 crc kubenswrapper[4854]: I0103 06:05:09.342207 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 03 06:05:09 crc kubenswrapper[4854]: I0103 06:05:09.576879 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 03 06:05:09 crc kubenswrapper[4854]: I0103 06:05:09.577510 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 03 06:05:09 crc kubenswrapper[4854]: I0103 06:05:09.577524 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 03 06:05:10 crc kubenswrapper[4854]: I0103 06:05:10.590363 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 06:05:11 crc kubenswrapper[4854]: I0103 06:05:11.599610 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 06:05:11 crc kubenswrapper[4854]: I0103 06:05:11.599858 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 06:05:11 crc kubenswrapper[4854]: I0103 06:05:11.755192 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:05:11 crc kubenswrapper[4854]: I0103 06:05:11.755256 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:05:11 crc kubenswrapper[4854]: I0103 06:05:11.755301 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:05:11 crc kubenswrapper[4854]: I0103 06:05:11.756190 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b4c3aaa2ac11419adcfab21b6c1450ea5c292a92e0be09a3fba503318e11474"} 
pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:05:11 crc kubenswrapper[4854]: I0103 06:05:11.756249 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://9b4c3aaa2ac11419adcfab21b6c1450ea5c292a92e0be09a3fba503318e11474" gracePeriod=600 Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.176305 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.279005 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-db-sync-config-data\") pod \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.279455 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk9wm\" (UniqueName: \"kubernetes.io/projected/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-kube-api-access-pk9wm\") pod \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.279507 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-combined-ca-bundle\") pod \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\" (UID: \"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4\") " Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.296011 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" (UID: "cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.296723 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-kube-api-access-pk9wm" (OuterVolumeSpecName: "kube-api-access-pk9wm") pod "cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" (UID: "cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4"). InnerVolumeSpecName "kube-api-access-pk9wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.316146 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" (UID: "cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.356932 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.383793 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk9wm\" (UniqueName: \"kubernetes.io/projected/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-kube-api-access-pk9wm\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.383835 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.383850 4854 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.402132 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.453297 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.453422 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.467549 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.624326 4854 generic.go:334] "Generic (PLEG): container finished" podID="8f46296d-5d5c-4aa8-94e1-e8e5951da088" containerID="86f9f477689f1793ce52c451d0b1e9302ba247029344f9a1a6cfa17c673ff8e7" exitCode=0 Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.624654 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9rnh5" event={"ID":"8f46296d-5d5c-4aa8-94e1-e8e5951da088","Type":"ContainerDied","Data":"86f9f477689f1793ce52c451d0b1e9302ba247029344f9a1a6cfa17c673ff8e7"} Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.627883 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="9b4c3aaa2ac11419adcfab21b6c1450ea5c292a92e0be09a3fba503318e11474" exitCode=0 Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.628009 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"9b4c3aaa2ac11419adcfab21b6c1450ea5c292a92e0be09a3fba503318e11474"} Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.628062 4854 scope.go:117] "RemoveContainer" containerID="698413854ef7d83140b2bf1b7914886f1f1ee8bad9480a9e32b96368143c12a3" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.634208 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lk7dp" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.636429 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lk7dp" event={"ID":"cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4","Type":"ContainerDied","Data":"83a55ddfeddef356295ac2aa6c98fa9b1759f1f9a323231eeade45af5c64f724"} Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.636470 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83a55ddfeddef356295ac2aa6c98fa9b1759f1f9a323231eeade45af5c64f724" Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.636948 4854 generic.go:334] "Generic (PLEG): container finished" podID="ca061deb-f600-49db-8ac3-6213e22b2f76" containerID="e3b5c91257f418ab8f271fe9fa7d08b1009bc5b328230b69226f1d7bb15dd647" exitCode=0 Jan 03 06:05:12 crc kubenswrapper[4854]: I0103 06:05:12.637039 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sd52b" event={"ID":"ca061deb-f600-49db-8ac3-6213e22b2f76","Type":"ContainerDied","Data":"e3b5c91257f418ab8f271fe9fa7d08b1009bc5b328230b69226f1d7bb15dd647"} Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.550144 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-64c7fcd798-ntxft"] Jan 03 06:05:13 crc kubenswrapper[4854]: E0103 06:05:13.550793 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" containerName="barbican-db-sync" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.550818 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" containerName="barbican-db-sync" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.551098 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" containerName="barbican-db-sync" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.552681 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.562924 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.563235 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.563394 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zhvd5" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.583496 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6d8cd4cdc9-wwfpf"] Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.586182 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.589027 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.626831 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75kl\" (UniqueName: \"kubernetes.io/projected/ebcdaaae-12a6-437d-a050-ee71e343b5b0-kube-api-access-l75kl\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627183 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-combined-ca-bundle\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627237 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebcdaaae-12a6-437d-a050-ee71e343b5b0-logs\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627268 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-combined-ca-bundle\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627293 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8xj\" (UniqueName: \"kubernetes.io/projected/e56f3e19-d54e-44be-9a12-485d5f86231f-kube-api-access-lk8xj\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627314 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-config-data\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627386 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-config-data\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627560 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-config-data-custom\") pod 
\"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627594 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e56f3e19-d54e-44be-9a12-485d5f86231f-logs\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.627641 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-config-data-custom\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.644425 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d8cd4cdc9-wwfpf"] Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.682295 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64c7fcd798-ntxft"] Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.729826 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-config-data\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.729909 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-config-data-custom\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.729931 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e56f3e19-d54e-44be-9a12-485d5f86231f-logs\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.729954 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-config-data-custom\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.730056 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l75kl\" (UniqueName: \"kubernetes.io/projected/ebcdaaae-12a6-437d-a050-ee71e343b5b0-kube-api-access-l75kl\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.730094 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-combined-ca-bundle\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.730122 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebcdaaae-12a6-437d-a050-ee71e343b5b0-logs\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.730159 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-combined-ca-bundle\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.730197 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk8xj\" (UniqueName: \"kubernetes.io/projected/e56f3e19-d54e-44be-9a12-485d5f86231f-kube-api-access-lk8xj\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.730227 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-config-data\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.730724 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e56f3e19-d54e-44be-9a12-485d5f86231f-logs\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.732244 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebcdaaae-12a6-437d-a050-ee71e343b5b0-logs\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.741046 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-config-data-custom\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.744567 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-combined-ca-bundle\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.745002 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-combined-ca-bundle\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.751529 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-config-data-custom\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.756853 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lrn9n"] Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.764459 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e56f3e19-d54e-44be-9a12-485d5f86231f-config-data\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.765792 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcdaaae-12a6-437d-a050-ee71e343b5b0-config-data\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.771896 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l75kl\" (UniqueName: \"kubernetes.io/projected/ebcdaaae-12a6-437d-a050-ee71e343b5b0-kube-api-access-l75kl\") pod \"barbican-keystone-listener-64c7fcd798-ntxft\" (UID: \"ebcdaaae-12a6-437d-a050-ee71e343b5b0\") " pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.780566 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8xj\" (UniqueName: \"kubernetes.io/projected/e56f3e19-d54e-44be-9a12-485d5f86231f-kube-api-access-lk8xj\") pod \"barbican-worker-6d8cd4cdc9-wwfpf\" (UID: \"e56f3e19-d54e-44be-9a12-485d5f86231f\") " pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.799058 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ps98h"] Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.801237 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.810024 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ps98h"] Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.829583 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55c4d98986-689lr"] Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.831443 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.832944 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-config\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.833120 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.833259 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsf6z\" (UniqueName: \"kubernetes.io/projected/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-kube-api-access-fsf6z\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.833425 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.838459 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.838790 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.834579 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.856030 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55c4d98986-689lr"] Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.927799 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.948559 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951401 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-config\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951444 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951488 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-combined-ca-bundle\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951517 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsf6z\" (UniqueName: \"kubernetes.io/projected/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-kube-api-access-fsf6z\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951551 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951576 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2481a421-76c9-4baa-bde8-c93eebdc4403-logs\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951610 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data-custom\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951631 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951650 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: 
\"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951680 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.951740 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4zd\" (UniqueName: \"kubernetes.io/projected/2481a421-76c9-4baa-bde8-c93eebdc4403-kube-api-access-mb4zd\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.952565 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-config\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.952570 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.953707 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.954165 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.954498 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:13 crc kubenswrapper[4854]: I0103 06:05:13.959361 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.013611 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsf6z\" (UniqueName: \"kubernetes.io/projected/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-kube-api-access-fsf6z\") pod \"dnsmasq-dns-85ff748b95-ps98h\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.056101 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2481a421-76c9-4baa-bde8-c93eebdc4403-logs\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.056165 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data-custom\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.056280 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb4zd\" (UniqueName: \"kubernetes.io/projected/2481a421-76c9-4baa-bde8-c93eebdc4403-kube-api-access-mb4zd\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.056451 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-combined-ca-bundle\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.056504 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.059101 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2481a421-76c9-4baa-bde8-c93eebdc4403-logs\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.062963 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.067957 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-combined-ca-bundle\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.091818 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data-custom\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.093201 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb4zd\" (UniqueName: 
\"kubernetes.io/projected/2481a421-76c9-4baa-bde8-c93eebdc4403-kube-api-access-mb4zd\") pod \"barbican-api-55c4d98986-689lr\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.254109 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:14 crc kubenswrapper[4854]: I0103 06:05:14.275007 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.742672 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9rnh5" event={"ID":"8f46296d-5d5c-4aa8-94e1-e8e5951da088","Type":"ContainerDied","Data":"41c5188e210eaa59658606d5f57f53557732ef08e19a9aeb1dccefc4354822fc"} Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.743344 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41c5188e210eaa59658606d5f57f53557732ef08e19a9aeb1dccefc4354822fc" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.744956 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sd52b" event={"ID":"ca061deb-f600-49db-8ac3-6213e22b2f76","Type":"ContainerDied","Data":"023fa117f5cafd138b401ad05378ebde7011a33485e2c9cb6e4a43af0698a536"} Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.746071 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023fa117f5cafd138b401ad05378ebde7011a33485e2c9cb6e4a43af0698a536" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.749740 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9rnh5" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.767115 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sd52b" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880074 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-combined-ca-bundle\") pod \"ca061deb-f600-49db-8ac3-6213e22b2f76\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880203 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca061deb-f600-49db-8ac3-6213e22b2f76-etc-machine-id\") pod \"ca061deb-f600-49db-8ac3-6213e22b2f76\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880306 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-db-sync-config-data\") pod \"ca061deb-f600-49db-8ac3-6213e22b2f76\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880363 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-scripts\") pod \"ca061deb-f600-49db-8ac3-6213e22b2f76\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880410 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-combined-ca-bundle\") pod \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880432 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-config-data\") pod \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880483 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrnjg\" (UniqueName: \"kubernetes.io/projected/8f46296d-5d5c-4aa8-94e1-e8e5951da088-kube-api-access-vrnjg\") pod \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\" (UID: \"8f46296d-5d5c-4aa8-94e1-e8e5951da088\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880556 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-config-data\") pod \"ca061deb-f600-49db-8ac3-6213e22b2f76\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.880582 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc7nb\" (UniqueName: \"kubernetes.io/projected/ca061deb-f600-49db-8ac3-6213e22b2f76-kube-api-access-bc7nb\") pod \"ca061deb-f600-49db-8ac3-6213e22b2f76\" (UID: \"ca061deb-f600-49db-8ac3-6213e22b2f76\") " Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.884979 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca061deb-f600-49db-8ac3-6213e22b2f76-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"ca061deb-f600-49db-8ac3-6213e22b2f76" (UID: "ca061deb-f600-49db-8ac3-6213e22b2f76"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.911530 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-scripts" (OuterVolumeSpecName: "scripts") pod "ca061deb-f600-49db-8ac3-6213e22b2f76" (UID: "ca061deb-f600-49db-8ac3-6213e22b2f76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.914217 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ca061deb-f600-49db-8ac3-6213e22b2f76" (UID: "ca061deb-f600-49db-8ac3-6213e22b2f76"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.929279 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca061deb-f600-49db-8ac3-6213e22b2f76-kube-api-access-bc7nb" (OuterVolumeSpecName: "kube-api-access-bc7nb") pod "ca061deb-f600-49db-8ac3-6213e22b2f76" (UID: "ca061deb-f600-49db-8ac3-6213e22b2f76"). InnerVolumeSpecName "kube-api-access-bc7nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.929358 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f46296d-5d5c-4aa8-94e1-e8e5951da088-kube-api-access-vrnjg" (OuterVolumeSpecName: "kube-api-access-vrnjg") pod "8f46296d-5d5c-4aa8-94e1-e8e5951da088" (UID: "8f46296d-5d5c-4aa8-94e1-e8e5951da088"). InnerVolumeSpecName "kube-api-access-vrnjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.938955 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca061deb-f600-49db-8ac3-6213e22b2f76" (UID: "ca061deb-f600-49db-8ac3-6213e22b2f76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.957853 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f46296d-5d5c-4aa8-94e1-e8e5951da088" (UID: "8f46296d-5d5c-4aa8-94e1-e8e5951da088"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.990612 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.990650 4854 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca061deb-f600-49db-8ac3-6213e22b2f76-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.990660 4854 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.990668 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.991121 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.991134 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrnjg\" (UniqueName: \"kubernetes.io/projected/8f46296d-5d5c-4aa8-94e1-e8e5951da088-kube-api-access-vrnjg\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:15 crc kubenswrapper[4854]: I0103 06:05:15.991145 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc7nb\" (UniqueName: \"kubernetes.io/projected/ca061deb-f600-49db-8ac3-6213e22b2f76-kube-api-access-bc7nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.016316 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-config-data" (OuterVolumeSpecName: "config-data") pod "ca061deb-f600-49db-8ac3-6213e22b2f76" (UID: "ca061deb-f600-49db-8ac3-6213e22b2f76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.061282 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-config-data" (OuterVolumeSpecName: "config-data") pod "8f46296d-5d5c-4aa8-94e1-e8e5951da088" (UID: "8f46296d-5d5c-4aa8-94e1-e8e5951da088"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.094052 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f46296d-5d5c-4aa8-94e1-e8e5951da088-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.099404 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca061deb-f600-49db-8ac3-6213e22b2f76-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.193613 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fd77dbb5-mpxrq" Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.302033 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84c5455478-hczhs"] Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.302286 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84c5455478-hczhs" podUID="46254f53-deed-4254-801c-1db0db3ec56a" containerName="neutron-api" containerID="cri-o://d3785ee455877013b890619cead6235fe7f8ab8a467a1246625f7607bc7132ce" gracePeriod=30 Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.302535 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84c5455478-hczhs" podUID="46254f53-deed-4254-801c-1db0db3ec56a" containerName="neutron-httpd" containerID="cri-o://6c49479d5e0adf15ac1d291b5867a350c34eaac96d004e0a6d75614322aac40e" gracePeriod=30 Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.379894 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d8cd4cdc9-wwfpf"] Jan 03 06:05:16 crc kubenswrapper[4854]: W0103 06:05:16.392247 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode56f3e19_d54e_44be_9a12_485d5f86231f.slice/crio-2c49f9cba78fda3a56dca441330d2dabaa7e57145f8dff038d4db148555601ee WatchSource:0}: Error finding container 2c49f9cba78fda3a56dca441330d2dabaa7e57145f8dff038d4db148555601ee: Status 404 returned error can't find the container with id 2c49f9cba78fda3a56dca441330d2dabaa7e57145f8dff038d4db148555601ee Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.645172 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64c7fcd798-ntxft"] Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.682990 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ps98h"] Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.809955 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" podUID="4c5068d8-79f7-4319-8b27-2343ad584166" containerName="dnsmasq-dns" containerID="cri-o://dbc5f25866c9bc3608c4d3922bbed707a895efde6919056b9465afe61ee5d343" gracePeriod=10 Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.810214 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" event={"ID":"4c5068d8-79f7-4319-8b27-2343ad584166","Type":"ContainerStarted","Data":"dbc5f25866c9bc3608c4d3922bbed707a895efde6919056b9465afe61ee5d343"} Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.810280 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:16 crc 
kubenswrapper[4854]: I0103 06:05:16.842707 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerStarted","Data":"5beab47bad9dc5e476aabb95fa0105475efdf361b3799d04217ef361f80b0dde"} Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.894957 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"} Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.898646 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" event={"ID":"e56f3e19-d54e-44be-9a12-485d5f86231f","Type":"ContainerStarted","Data":"2c49f9cba78fda3a56dca441330d2dabaa7e57145f8dff038d4db148555601ee"} Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.940505 4854 generic.go:334] "Generic (PLEG): container finished" podID="46254f53-deed-4254-801c-1db0db3ec56a" containerID="6c49479d5e0adf15ac1d291b5867a350c34eaac96d004e0a6d75614322aac40e" exitCode=0 Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.940619 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84c5455478-hczhs" event={"ID":"46254f53-deed-4254-801c-1db0db3ec56a","Type":"ContainerDied","Data":"6c49479d5e0adf15ac1d291b5867a350c34eaac96d004e0a6d75614322aac40e"} Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.943609 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ps98h" event={"ID":"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c","Type":"ContainerStarted","Data":"a2da0180ffeae2bc3629eba53cd3039a19eedaf745f4e9ba56f4729198e91b0e"} Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.949180 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9rnh5" Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.949623 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" event={"ID":"ebcdaaae-12a6-437d-a050-ee71e343b5b0","Type":"ContainerStarted","Data":"2ea0b48af1fcda68624481a4d1ffc0a09340f1350527fa1dc1478b8df30941b7"} Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.949764 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sd52b" Jan 03 06:05:16 crc kubenswrapper[4854]: I0103 06:05:16.971195 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55c4d98986-689lr"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.023818 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" podStartSLOduration=12.023796038 podStartE2EDuration="12.023796038s" podCreationTimestamp="2026-01-03 06:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:16.869913151 +0000 UTC m=+1495.196489723" watchObservedRunningTime="2026-01-03 06:05:17.023796038 +0000 UTC m=+1495.350372610" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.108188 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f9d458c9d-vtsmw"] Jan 03 06:05:17 crc kubenswrapper[4854]: E0103 06:05:17.109325 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca061deb-f600-49db-8ac3-6213e22b2f76" containerName="cinder-db-sync" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.109346 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca061deb-f600-49db-8ac3-6213e22b2f76" containerName="cinder-db-sync" Jan 03 06:05:17 crc kubenswrapper[4854]: E0103 06:05:17.109363 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f46296d-5d5c-4aa8-94e1-e8e5951da088" containerName="heat-db-sync" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.109370 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f46296d-5d5c-4aa8-94e1-e8e5951da088" containerName="heat-db-sync" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.109592 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f46296d-5d5c-4aa8-94e1-e8e5951da088" containerName="heat-db-sync" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.109618 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca061deb-f600-49db-8ac3-6213e22b2f76" containerName="cinder-db-sync" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.114358 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.126778 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.126977 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.147550 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdj49\" (UniqueName: \"kubernetes.io/projected/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-kube-api-access-gdj49\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.147656 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-config-data\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.147688 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-internal-tls-certs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.147706 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-logs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.147733 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-public-tls-certs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.147790 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-config-data-custom\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.147822 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-combined-ca-bundle\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.167146 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f9d458c9d-vtsmw"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.245304 4854 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.250022 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdj49\" (UniqueName: \"kubernetes.io/projected/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-kube-api-access-gdj49\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.250125 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-config-data\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.250155 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-internal-tls-certs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.250170 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-logs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.250196 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-public-tls-certs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.250247 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-config-data-custom\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.250282 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-combined-ca-bundle\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.274020 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-combined-ca-bundle\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.274126 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.274322 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-logs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.285278 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.291015 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-internal-tls-certs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.292053 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-46g77" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.293781 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-public-tls-certs\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.300627 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-config-data\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.301271 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-config-data-custom\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.303915 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.304299 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.311658 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.322056 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdj49\" (UniqueName: \"kubernetes.io/projected/c5aa7cd7-25a7-4228-a047-5fef936c6a9a-kube-api-access-gdj49\") pod \"barbican-api-5f9d458c9d-vtsmw\" (UID: \"c5aa7cd7-25a7-4228-a047-5fef936c6a9a\") " pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.380464 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ps98h"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.471775 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.473517 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.473575 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.473596 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.473615 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-scripts\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.473711 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdlzs\" (UniqueName: \"kubernetes.io/projected/cab04439-797a-489b-a4a7-d7cd3c23ccec-kube-api-access-sdlzs\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.473741 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cab04439-797a-489b-a4a7-d7cd3c23ccec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.578641 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdlzs\" (UniqueName: \"kubernetes.io/projected/cab04439-797a-489b-a4a7-d7cd3c23ccec-kube-api-access-sdlzs\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.578892 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cab04439-797a-489b-a4a7-d7cd3c23ccec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.579045 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc 
kubenswrapper[4854]: I0103 06:05:17.579185 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.579277 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.579354 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-scripts\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.590800 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cab04439-797a-489b-a4a7-d7cd3c23ccec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.612517 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.612590 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-klgkr"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.614847 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.656841 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdlzs\" (UniqueName: \"kubernetes.io/projected/cab04439-797a-489b-a4a7-d7cd3c23ccec-kube-api-access-sdlzs\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.670792 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.672552 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-scripts\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.674068 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data\") pod \"cinder-scheduler-0\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.680531 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.680572 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.680659 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wx6m\" (UniqueName: \"kubernetes.io/projected/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-kube-api-access-5wx6m\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.680680 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-config\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.680712 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc 
kubenswrapper[4854]: I0103 06:05:17.680729 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.787490 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wx6m\" (UniqueName: \"kubernetes.io/projected/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-kube-api-access-5wx6m\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.787768 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-config\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.787808 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.787837 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.787938 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.787974 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.789032 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.790752 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-config\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.793127 4854 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.793819 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.793916 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-klgkr"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.797938 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.883282 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wx6m\" (UniqueName: \"kubernetes.io/projected/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-kube-api-access-5wx6m\") pod \"dnsmasq-dns-5c9776ccc5-klgkr\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.947836 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.950537 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.958030 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.966913 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.967422 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 06:05:17 crc kubenswrapper[4854]: I0103 06:05:17.989698 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c4d98986-689lr" event={"ID":"2481a421-76c9-4baa-bde8-c93eebdc4403","Type":"ContainerStarted","Data":"e59b61bb699f38a2ee510ee28ab7a005aba2f1546b065316b5520ada8a173cfc"} Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.002160 4854 generic.go:334] "Generic (PLEG): container finished" podID="4c5068d8-79f7-4319-8b27-2343ad584166" containerID="dbc5f25866c9bc3608c4d3922bbed707a895efde6919056b9465afe61ee5d343" exitCode=0 Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.002387 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="ceilometer-central-agent" containerID="cri-o://0aed09d0236567b894aaca3c66501d8aa34d7c0933ab3953fd875848ea83a542" gracePeriod=30 Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.002632 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" event={"ID":"4c5068d8-79f7-4319-8b27-2343ad584166","Type":"ContainerDied","Data":"dbc5f25866c9bc3608c4d3922bbed707a895efde6919056b9465afe61ee5d343"} Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.003667 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.004059 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="proxy-httpd" containerID="cri-o://5beab47bad9dc5e476aabb95fa0105475efdf361b3799d04217ef361f80b0dde" gracePeriod=30 Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.004138 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="sg-core" containerID="cri-o://3ab9a5fd121afa77a60db235cc7985030170fb6cee975ec755d41725cd7857eb" gracePeriod=30 Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.004173 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="ceilometer-notification-agent" containerID="cri-o://ef373afcf83f8495b8f8ff89622224a235b10af839d685bcf814caa4d3119a97" gracePeriod=30 Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.077721 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.111601 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.084011048 podStartE2EDuration="1m19.111580186s" podCreationTimestamp="2026-01-03 06:03:59 +0000 UTC" firstStartedPulling="2026-01-03 06:04:01.826693095 +0000 UTC m=+1420.153269667" lastFinishedPulling="2026-01-03 06:05:15.854262243 +0000 UTC m=+1494.180838805" observedRunningTime="2026-01-03 06:05:18.065337423 +0000 UTC m=+1496.391913985" watchObservedRunningTime="2026-01-03 06:05:18.111580186 +0000 UTC m=+1496.438156758" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.153880 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9j97\" (UniqueName: \"kubernetes.io/projected/8e00ba39-1426-43ee-bedc-b865bb3cc96a-kube-api-access-q9j97\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.153925 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.153972 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-scripts\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.153990 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.154035 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e00ba39-1426-43ee-bedc-b865bb3cc96a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.159036 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e00ba39-1426-43ee-bedc-b865bb3cc96a-logs\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.159104 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.262811 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9j97\" (UniqueName: 
\"kubernetes.io/projected/8e00ba39-1426-43ee-bedc-b865bb3cc96a-kube-api-access-q9j97\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.263156 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.263230 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-scripts\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.263248 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.263318 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e00ba39-1426-43ee-bedc-b865bb3cc96a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.263491 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e00ba39-1426-43ee-bedc-b865bb3cc96a-logs\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.263516 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.266973 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e00ba39-1426-43ee-bedc-b865bb3cc96a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.270032 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e00ba39-1426-43ee-bedc-b865bb3cc96a-logs\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.271241 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.276587 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.286184 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-scripts\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.286369 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.324268 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9j97\" (UniqueName: \"kubernetes.io/projected/8e00ba39-1426-43ee-bedc-b865bb3cc96a-kube-api-access-q9j97\") pod \"cinder-api-0\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") " pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.391390 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 03 06:05:18 crc kubenswrapper[4854]: I0103 06:05:18.919118 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.009869 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-svc\") pod \"4c5068d8-79f7-4319-8b27-2343ad584166\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.010066 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wswt2\" (UniqueName: \"kubernetes.io/projected/4c5068d8-79f7-4319-8b27-2343ad584166-kube-api-access-wswt2\") pod \"4c5068d8-79f7-4319-8b27-2343ad584166\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.010114 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-swift-storage-0\") pod \"4c5068d8-79f7-4319-8b27-2343ad584166\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.010299 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-nb\") pod \"4c5068d8-79f7-4319-8b27-2343ad584166\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.010370 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-config\") pod \"4c5068d8-79f7-4319-8b27-2343ad584166\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.010416 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-sb\") pod \"4c5068d8-79f7-4319-8b27-2343ad584166\" (UID: \"4c5068d8-79f7-4319-8b27-2343ad584166\") " Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.051904 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5068d8-79f7-4319-8b27-2343ad584166-kube-api-access-wswt2" (OuterVolumeSpecName: "kube-api-access-wswt2") pod "4c5068d8-79f7-4319-8b27-2343ad584166" (UID: "4c5068d8-79f7-4319-8b27-2343ad584166"). InnerVolumeSpecName "kube-api-access-wswt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.056316 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" event={"ID":"4c5068d8-79f7-4319-8b27-2343ad584166","Type":"ContainerDied","Data":"147edc5e14d7a380a1c17a1e92643076ecf5ae648c6af502432f92898b8efa27"} Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.056368 4854 scope.go:117] "RemoveContainer" containerID="dbc5f25866c9bc3608c4d3922bbed707a895efde6919056b9465afe61ee5d343" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.056568 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lrn9n" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.112997 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wswt2\" (UniqueName: \"kubernetes.io/projected/4c5068d8-79f7-4319-8b27-2343ad584166-kube-api-access-wswt2\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.132795 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-config" (OuterVolumeSpecName: "config") pod "4c5068d8-79f7-4319-8b27-2343ad584166" (UID: "4c5068d8-79f7-4319-8b27-2343ad584166"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.133728 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerDied","Data":"5beab47bad9dc5e476aabb95fa0105475efdf361b3799d04217ef361f80b0dde"} Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.134678 4854 generic.go:334] "Generic (PLEG): container finished" podID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerID="5beab47bad9dc5e476aabb95fa0105475efdf361b3799d04217ef361f80b0dde" exitCode=0 Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.134728 4854 generic.go:334] "Generic (PLEG): container finished" podID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerID="3ab9a5fd121afa77a60db235cc7985030170fb6cee975ec755d41725cd7857eb" exitCode=2 Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.134739 4854 generic.go:334] "Generic (PLEG): container finished" podID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerID="0aed09d0236567b894aaca3c66501d8aa34d7c0933ab3953fd875848ea83a542" exitCode=0 Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.135545 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerDied","Data":"3ab9a5fd121afa77a60db235cc7985030170fb6cee975ec755d41725cd7857eb"} Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.135632 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerDied","Data":"0aed09d0236567b894aaca3c66501d8aa34d7c0933ab3953fd875848ea83a542"} Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.137115 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c5068d8-79f7-4319-8b27-2343ad584166" (UID: "4c5068d8-79f7-4319-8b27-2343ad584166"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.150199 4854 generic.go:334] "Generic (PLEG): container finished" podID="4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" containerID="1d4bc0c02e6285b3b7b14891500ede1533142bd516108890459082f733482eba" exitCode=0 Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.150290 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ps98h" event={"ID":"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c","Type":"ContainerDied","Data":"1d4bc0c02e6285b3b7b14891500ede1533142bd516108890459082f733482eba"} Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.178025 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c5068d8-79f7-4319-8b27-2343ad584166" (UID: "4c5068d8-79f7-4319-8b27-2343ad584166"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.190746 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4c5068d8-79f7-4319-8b27-2343ad584166" (UID: "4c5068d8-79f7-4319-8b27-2343ad584166"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.193164 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c4d98986-689lr" event={"ID":"2481a421-76c9-4baa-bde8-c93eebdc4403","Type":"ContainerStarted","Data":"077594c8c17b300edaa8fa8bc9934b5db9958d98469ec182dda06e83f37e2da3"} Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.193209 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c4d98986-689lr" event={"ID":"2481a421-76c9-4baa-bde8-c93eebdc4403","Type":"ContainerStarted","Data":"7613dcbe587227c5c7f61de9f7ee2d2a1e76ab284f23afa6fe78b635af035ceb"} Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.194132 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.194181 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.194402 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c5068d8-79f7-4319-8b27-2343ad584166" (UID: "4c5068d8-79f7-4319-8b27-2343ad584166"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.217547 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.225796 4854 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.225835 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.225848 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.225858 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.225868 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c5068d8-79f7-4319-8b27-2343ad584166-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.241682 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f9d458c9d-vtsmw"] Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.306252 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55c4d98986-689lr" podStartSLOduration=6.306225831 podStartE2EDuration="6.306225831s" podCreationTimestamp="2026-01-03 06:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-03 06:05:19.217521187 +0000 UTC m=+1497.544097769" watchObservedRunningTime="2026-01-03 06:05:19.306225831 +0000 UTC m=+1497.632802403" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.364071 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7479b69c6-2zfrf" Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.402610 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-klgkr"] Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.513311 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lrn9n"] Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.557319 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lrn9n"] Jan 03 06:05:19 crc kubenswrapper[4854]: I0103 06:05:19.779373 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 03 06:05:20 crc kubenswrapper[4854]: I0103 06:05:20.133361 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5068d8-79f7-4319-8b27-2343ad584166" path="/var/lib/kubelet/pods/4c5068d8-79f7-4319-8b27-2343ad584166/volumes" Jan 03 06:05:20 crc kubenswrapper[4854]: I0103 06:05:20.211970 4854 generic.go:334] "Generic (PLEG): container finished" podID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerID="ef373afcf83f8495b8f8ff89622224a235b10af839d685bcf814caa4d3119a97" exitCode=0 Jan 03 06:05:20 crc kubenswrapper[4854]: I0103 06:05:20.212041 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerDied","Data":"ef373afcf83f8495b8f8ff89622224a235b10af839d685bcf814caa4d3119a97"} Jan 03 06:05:20 crc kubenswrapper[4854]: W0103 06:05:20.250829 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcab04439_797a_489b_a4a7_d7cd3c23ccec.slice/crio-9c0c6a7a4c84ea88a2306a2989e750167ae033953609621d1822cb80aa70e3c4 WatchSource:0}: Error finding container 9c0c6a7a4c84ea88a2306a2989e750167ae033953609621d1822cb80aa70e3c4: Status 404 returned error can't find the container with id 9c0c6a7a4c84ea88a2306a2989e750167ae033953609621d1822cb80aa70e3c4 Jan 03 06:05:20 crc kubenswrapper[4854]: I0103 06:05:20.619752 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.241339 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" event={"ID":"1f4da9c0-58a1-41d0-9d97-6cf376e6233d","Type":"ContainerStarted","Data":"86e7461bf666df7d18a14b87ed1e808de533e7504191a20e3dd9d82fd0eb142c"} Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.242617 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cab04439-797a-489b-a4a7-d7cd3c23ccec","Type":"ContainerStarted","Data":"9c0c6a7a4c84ea88a2306a2989e750167ae033953609621d1822cb80aa70e3c4"} Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.252308 4854 generic.go:334] "Generic (PLEG): container finished" podID="46254f53-deed-4254-801c-1db0db3ec56a" containerID="d3785ee455877013b890619cead6235fe7f8ab8a467a1246625f7607bc7132ce" exitCode=0 Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.252359 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84c5455478-hczhs" 
event={"ID":"46254f53-deed-4254-801c-1db0db3ec56a","Type":"ContainerDied","Data":"d3785ee455877013b890619cead6235fe7f8ab8a467a1246625f7607bc7132ce"} Jan 03 06:05:21 crc kubenswrapper[4854]: W0103 06:05:21.390767 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5aa7cd7_25a7_4228_a047_5fef936c6a9a.slice/crio-b086604ef7127ba1fe93a232d4dd13ca30c94802827c834ae6ced7cd6e22611a WatchSource:0}: Error finding container b086604ef7127ba1fe93a232d4dd13ca30c94802827c834ae6ced7cd6e22611a: Status 404 returned error can't find the container with id b086604ef7127ba1fe93a232d4dd13ca30c94802827c834ae6ced7cd6e22611a Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.519318 4854 scope.go:117] "RemoveContainer" containerID="fb0703b40598a0147541955f6542b7f3c5727badd3234e73d2e392f9c7411afe" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.636627 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.702624 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.707498 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-swift-storage-0\") pod \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.707741 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-sb\") pod \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.707805 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsf6z\" (UniqueName: \"kubernetes.io/projected/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-kube-api-access-fsf6z\") pod \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.707955 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-svc\") pod \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.707990 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-nb\") pod \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.708020 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-config\") pod \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\" (UID: \"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.718068 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-kube-api-access-fsf6z" (OuterVolumeSpecName: "kube-api-access-fsf6z") pod "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" (UID: "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c"). InnerVolumeSpecName "kube-api-access-fsf6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.757790 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" (UID: "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.761195 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" (UID: "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.766239 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" (UID: "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.805635 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-config" (OuterVolumeSpecName: "config") pod "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" (UID: "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.816446 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-scripts\") pod \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.816500 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-sg-core-conf-yaml\") pod \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.816726 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-log-httpd\") pod \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.816755 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-run-httpd\") pod \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.816804 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-config-data\") pod \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.816865 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-combined-ca-bundle\") pod \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.816919 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmj96\" (UniqueName: \"kubernetes.io/projected/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-kube-api-access-bmj96\") pod \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\" (UID: \"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6\") " Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.817613 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" (UID: "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.818674 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" (UID: "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.821161 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.821202 4854 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.821224 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.821277 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsf6z\" (UniqueName: \"kubernetes.io/projected/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-kube-api-access-fsf6z\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.821298 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.821311 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.821327 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.842719 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-scripts" (OuterVolumeSpecName: "scripts") pod "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" (UID: "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.877629 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" (UID: "4dd5e414-c0ff-441b-a802-de3f2bbf4a4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.892220 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" (UID: "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.929532 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.929659 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:21 crc kubenswrapper[4854]: I0103 06:05:21.929717 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.242501 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-kube-api-access-bmj96" (OuterVolumeSpecName: "kube-api-access-bmj96") pod "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" (UID: "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6"). InnerVolumeSpecName "kube-api-access-bmj96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.248382 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmj96\" (UniqueName: \"kubernetes.io/projected/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-kube-api-access-bmj96\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.302250 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" (UID: "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.323324 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e00ba39-1426-43ee-bedc-b865bb3cc96a","Type":"ContainerStarted","Data":"c1681aac39cb16b22710d796c6212d9fa2c70888bd9fbdc1afb3847136a837f6"} Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.330376 4854 info.go:109] Failed to get network devices: open /sys/class/net/d0c4cdffe46251f/address: no such file or directory Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.334379 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41de8a7f-850a-4f6e-8623-e0cdbcdf79e6","Type":"ContainerDied","Data":"ecc2df736ced624a2de286a9ef0f2a3e871434d7dc7261daabf050a5b1a2966f"} Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.334432 4854 scope.go:117] "RemoveContainer" containerID="5beab47bad9dc5e476aabb95fa0105475efdf361b3799d04217ef361f80b0dde" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.334554 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.345696 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9d458c9d-vtsmw" event={"ID":"c5aa7cd7-25a7-4228-a047-5fef936c6a9a","Type":"ContainerStarted","Data":"b086604ef7127ba1fe93a232d4dd13ca30c94802827c834ae6ced7cd6e22611a"} Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.349888 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.369257 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-config-data" (OuterVolumeSpecName: "config-data") pod "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" (UID: "41de8a7f-850a-4f6e-8623-e0cdbcdf79e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.374421 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ps98h" event={"ID":"4dd5e414-c0ff-441b-a802-de3f2bbf4a4c","Type":"ContainerDied","Data":"a2da0180ffeae2bc3629eba53cd3039a19eedaf745f4e9ba56f4729198e91b0e"} Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.374558 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ps98h" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.384881 4854 scope.go:117] "RemoveContainer" containerID="3ab9a5fd121afa77a60db235cc7985030170fb6cee975ec755d41725cd7857eb" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.452217 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.678016 4854 scope.go:117] "RemoveContainer" containerID="ef373afcf83f8495b8f8ff89622224a235b10af839d685bcf814caa4d3119a97" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.787632 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.787773 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.800090 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822160 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822750 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="ceilometer-notification-agent" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822763 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="ceilometer-notification-agent" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822779 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="proxy-httpd" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822788 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="proxy-httpd" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822804 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46254f53-deed-4254-801c-1db0db3ec56a" containerName="neutron-api" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822810 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="46254f53-deed-4254-801c-1db0db3ec56a" containerName="neutron-api" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822819 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="ceilometer-central-agent" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822825 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="ceilometer-central-agent" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822837 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="sg-core" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822843 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="sg-core" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822856 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46254f53-deed-4254-801c-1db0db3ec56a" containerName="neutron-httpd" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822862 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="46254f53-deed-4254-801c-1db0db3ec56a" containerName="neutron-httpd" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822877 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" containerName="init" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822884 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" containerName="init" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822905 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5068d8-79f7-4319-8b27-2343ad584166" containerName="dnsmasq-dns" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822912 4854 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5068d8-79f7-4319-8b27-2343ad584166" containerName="dnsmasq-dns" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.822925 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5068d8-79f7-4319-8b27-2343ad584166" containerName="init" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.822930 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5068d8-79f7-4319-8b27-2343ad584166" containerName="init" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.823152 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="ceilometer-central-agent" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.823166 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="46254f53-deed-4254-801c-1db0db3ec56a" containerName="neutron-httpd" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.823182 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="proxy-httpd" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.823192 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5068d8-79f7-4319-8b27-2343ad584166" containerName="dnsmasq-dns" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.823200 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="sg-core" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.823216 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" containerName="ceilometer-notification-agent" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.823223 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" containerName="init" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.823234 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="46254f53-deed-4254-801c-1db0db3ec56a" containerName="neutron-api" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.825162 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.835677 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.836533 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.847896 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.861542 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5nj8\" (UniqueName: \"kubernetes.io/projected/46254f53-deed-4254-801c-1db0db3ec56a-kube-api-access-t5nj8\") pod \"46254f53-deed-4254-801c-1db0db3ec56a\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862010 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-httpd-config\") pod \"46254f53-deed-4254-801c-1db0db3ec56a\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862059 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-ovndb-tls-certs\") pod \"46254f53-deed-4254-801c-1db0db3ec56a\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862119 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-config\") pod \"46254f53-deed-4254-801c-1db0db3ec56a\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862148 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-combined-ca-bundle\") pod \"46254f53-deed-4254-801c-1db0db3ec56a\" (UID: \"46254f53-deed-4254-801c-1db0db3ec56a\") " Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862342 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-log-httpd\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862410 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbfj\" (UniqueName: \"kubernetes.io/projected/a52c5887-6e07-4241-b543-55f19941dde9-kube-api-access-dpbfj\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862477 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862515 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-run-httpd\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862566 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-scripts\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862607 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.862643 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-config-data\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.866271 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ps98h"] Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.880165 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ps98h"] Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.894144 4854 scope.go:117] "RemoveContainer" containerID="0aed09d0236567b894aaca3c66501d8aa34d7c0933ab3953fd875848ea83a542" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.929478 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "46254f53-deed-4254-801c-1db0db3ec56a" (UID: "46254f53-deed-4254-801c-1db0db3ec56a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.936311 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46254f53-deed-4254-801c-1db0db3ec56a-kube-api-access-t5nj8" (OuterVolumeSpecName: "kube-api-access-t5nj8") pod "46254f53-deed-4254-801c-1db0db3ec56a" (UID: "46254f53-deed-4254-801c-1db0db3ec56a"). InnerVolumeSpecName "kube-api-access-t5nj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.966674 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpbfj\" (UniqueName: \"kubernetes.io/projected/a52c5887-6e07-4241-b543-55f19941dde9-kube-api-access-dpbfj\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.966766 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.966807 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-run-httpd\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.966834 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-scripts\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.966875 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.966902 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-config-data\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.966937 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-log-httpd\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.967028 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5nj8\" (UniqueName: \"kubernetes.io/projected/46254f53-deed-4254-801c-1db0db3ec56a-kube-api-access-t5nj8\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.967039 4854 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.967411 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-log-httpd\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.967571 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-run-httpd\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: E0103 06:05:22.974929 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41de8a7f_850a_4f6e_8623_e0cdbcdf79e6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41de8a7f_850a_4f6e_8623_e0cdbcdf79e6.slice/crio-ecc2df736ced624a2de286a9ef0f2a3e871434d7dc7261daabf050a5b1a2966f\": RecentStats: unable to find data in memory cache]" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.982583 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.984834 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-scripts\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.985494 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:22 crc kubenswrapper[4854]: I0103 06:05:22.986280 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-config-data\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:22.999371 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpbfj\" (UniqueName: \"kubernetes.io/projected/a52c5887-6e07-4241-b543-55f19941dde9-kube-api-access-dpbfj\") pod \"ceilometer-0\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " pod="openstack/ceilometer-0" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.168066 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.177768 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46254f53-deed-4254-801c-1db0db3ec56a" (UID: "46254f53-deed-4254-801c-1db0db3ec56a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.193034 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "46254f53-deed-4254-801c-1db0db3ec56a" (UID: "46254f53-deed-4254-801c-1db0db3ec56a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.201708 4854 scope.go:117] "RemoveContainer" containerID="1d4bc0c02e6285b3b7b14891500ede1533142bd516108890459082f733482eba" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.203615 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-config" (OuterVolumeSpecName: "config") pod "46254f53-deed-4254-801c-1db0db3ec56a" (UID: "46254f53-deed-4254-801c-1db0db3ec56a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.274768 4854 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.274809 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.274820 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46254f53-deed-4254-801c-1db0db3ec56a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.391459 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" event={"ID":"ebcdaaae-12a6-437d-a050-ee71e343b5b0","Type":"ContainerStarted","Data":"0f928f3fac6c333e6a3dc668b046f0528eca6cf010c522902e1f49fcd13cefbf"} Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.398471 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" event={"ID":"e56f3e19-d54e-44be-9a12-485d5f86231f","Type":"ContainerStarted","Data":"3a76bb2eb14f3a7cb3e9195d6e7ccb6175e985b97f4d883eb283d260264d605c"} Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.399659 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9d458c9d-vtsmw" event={"ID":"c5aa7cd7-25a7-4228-a047-5fef936c6a9a","Type":"ContainerStarted","Data":"209fbf827b81ce533d6ed28038e862cd93267bf878e2a649e41b99390f137397"} Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.403338 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84c5455478-hczhs" event={"ID":"46254f53-deed-4254-801c-1db0db3ec56a","Type":"ContainerDied","Data":"d0c4cdffe46251f9d88830d7cb175feb376e38c04043f8674c983072aaa61a72"} Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.403374 4854 scope.go:117] "RemoveContainer" containerID="6c49479d5e0adf15ac1d291b5867a350c34eaac96d004e0a6d75614322aac40e" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.403505 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84c5455478-hczhs" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.416708 4854 generic.go:334] "Generic (PLEG): container finished" podID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerID="5e1dbf25860fd5885f0a1a369acd074c344d95ffafb3709b272c0b5769b94715" exitCode=0 Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.416750 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" event={"ID":"1f4da9c0-58a1-41d0-9d97-6cf376e6233d","Type":"ContainerDied","Data":"5e1dbf25860fd5885f0a1a369acd074c344d95ffafb3709b272c0b5769b94715"} Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.497413 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84c5455478-hczhs"] Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.507533 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84c5455478-hczhs"] Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.639841 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-57b895997d-9fnc7" Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.779912 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:23 crc kubenswrapper[4854]: I0103 06:05:23.793861 4854 scope.go:117] "RemoveContainer" containerID="d3785ee455877013b890619cead6235fe7f8ab8a467a1246625f7607bc7132ce" Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.157132 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41de8a7f-850a-4f6e-8623-e0cdbcdf79e6" path="/var/lib/kubelet/pods/41de8a7f-850a-4f6e-8623-e0cdbcdf79e6/volumes" Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.158265 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46254f53-deed-4254-801c-1db0db3ec56a" path="/var/lib/kubelet/pods/46254f53-deed-4254-801c-1db0db3ec56a/volumes" Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.161588 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd5e414-c0ff-441b-a802-de3f2bbf4a4c" path="/var/lib/kubelet/pods/4dd5e414-c0ff-441b-a802-de3f2bbf4a4c/volumes" Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.433791 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" event={"ID":"e56f3e19-d54e-44be-9a12-485d5f86231f","Type":"ContainerStarted","Data":"a2e317cf3b6871a54c41baf54d5d1566ac51234088840906ff91e060eece354b"} Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.449426 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f9d458c9d-vtsmw" event={"ID":"c5aa7cd7-25a7-4228-a047-5fef936c6a9a","Type":"ContainerStarted","Data":"6ab0a8350f926ea6c531393f23479a1d87997baf6246ca04f98700954aa632a6"} Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.449539 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.449647 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.479544 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerStarted","Data":"5a7cb76d376453f46fb9280fdc45b04579d9d5455a8661be9be7b88c88c488fa"} Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 
Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.479989 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6d8cd4cdc9-wwfpf" podStartSLOduration=6.024229252 podStartE2EDuration="11.479975195s" podCreationTimestamp="2026-01-03 06:05:13 +0000 UTC" firstStartedPulling="2026-01-03 06:05:16.39513624 +0000 UTC m=+1494.721712812" lastFinishedPulling="2026-01-03 06:05:21.850882183 +0000 UTC m=+1500.177458755" observedRunningTime="2026-01-03 06:05:24.454983368 +0000 UTC m=+1502.781559950" watchObservedRunningTime="2026-01-03 06:05:24.479975195 +0000 UTC m=+1502.806551787"
Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.507429 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f9d458c9d-vtsmw" podStartSLOduration=8.507407161 podStartE2EDuration="8.507407161s" podCreationTimestamp="2026-01-03 06:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:24.478503519 +0000 UTC m=+1502.805080101" watchObservedRunningTime="2026-01-03 06:05:24.507407161 +0000 UTC m=+1502.833983743"
Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.511624 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" event={"ID":"1f4da9c0-58a1-41d0-9d97-6cf376e6233d","Type":"ContainerStarted","Data":"5dde7ca3d26f94eae20a208ed1bfb1b2061d103aca875cd6846d10ec38acc813"}
Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.513018 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr"
Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.535829 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" event={"ID":"ebcdaaae-12a6-437d-a050-ee71e343b5b0","Type":"ContainerStarted","Data":"13d17519dbc8064f9407aefa58328c313a10bd1bac653872f9c0e12bdda8580c"}
Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.548153 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e00ba39-1426-43ee-bedc-b865bb3cc96a","Type":"ContainerStarted","Data":"f39d91732fbc3e932a86b606581cdf01cb9c30d6f97ee8558a89acfe632c15e7"}
Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.583458 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" podStartSLOduration=7.583436498 podStartE2EDuration="7.583436498s" podCreationTimestamp="2026-01-03 06:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:24.573604439 +0000 UTC m=+1502.900181011" watchObservedRunningTime="2026-01-03 06:05:24.583436498 +0000 UTC m=+1502.910013070"
Jan 03 06:05:24 crc kubenswrapper[4854]: I0103 06:05:24.602428 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-64c7fcd798-ntxft" podStartSLOduration=6.401381473 podStartE2EDuration="11.602411099s" podCreationTimestamp="2026-01-03 06:05:13 +0000 UTC" firstStartedPulling="2026-01-03 06:05:16.654194902 +0000 UTC m=+1494.980771484" lastFinishedPulling="2026-01-03 06:05:21.855224538 +0000 UTC m=+1500.181801110" observedRunningTime="2026-01-03 06:05:24.593546194 +0000 UTC m=+1502.920122776" watchObservedRunningTime="2026-01-03 06:05:24.602411099 +0000 UTC m=+1502.928987671"
Jan 03 06:05:25 crc kubenswrapper[4854]: I0103 06:05:25.573274 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cab04439-797a-489b-a4a7-d7cd3c23ccec","Type":"ContainerStarted","Data":"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a"}
Jan 03 06:05:25 crc kubenswrapper[4854]: I0103 06:05:25.579108 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerStarted","Data":"7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869"}
Jan 03 06:05:25 crc kubenswrapper[4854]: I0103 06:05:25.588058 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerName="cinder-api-log" containerID="cri-o://f39d91732fbc3e932a86b606581cdf01cb9c30d6f97ee8558a89acfe632c15e7" gracePeriod=30
Jan 03 06:05:25 crc kubenswrapper[4854]: I0103 06:05:25.588331 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e00ba39-1426-43ee-bedc-b865bb3cc96a","Type":"ContainerStarted","Data":"cd0cdff0c3dca2fa64ecaa486b3f9f9418f65ad235f30ce662331a62232f7128"}
Jan 03 06:05:25 crc kubenswrapper[4854]: I0103 06:05:25.589710 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 03 06:05:25 crc kubenswrapper[4854]: I0103 06:05:25.590000 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerName="cinder-api" containerID="cri-o://cd0cdff0c3dca2fa64ecaa486b3f9f9418f65ad235f30ce662331a62232f7128" gracePeriod=30
Jan 03 06:05:25 crc kubenswrapper[4854]: I0103 06:05:25.626391 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.626367728 podStartE2EDuration="8.626367728s" podCreationTimestamp="2026-01-03 06:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:25.60586872 +0000 UTC m=+1503.932445302" watchObservedRunningTime="2026-01-03 06:05:25.626367728 +0000 UTC m=+1503.952944300"
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.597866 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerStarted","Data":"f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b"}
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.599831 4854 generic.go:334] "Generic (PLEG): container finished" podID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerID="cd0cdff0c3dca2fa64ecaa486b3f9f9418f65ad235f30ce662331a62232f7128" exitCode=0
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.599856 4854 generic.go:334] "Generic (PLEG): container finished" podID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerID="f39d91732fbc3e932a86b606581cdf01cb9c30d6f97ee8558a89acfe632c15e7" exitCode=143
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.599897 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e00ba39-1426-43ee-bedc-b865bb3cc96a","Type":"ContainerDied","Data":"cd0cdff0c3dca2fa64ecaa486b3f9f9418f65ad235f30ce662331a62232f7128"}
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.599924 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e00ba39-1426-43ee-bedc-b865bb3cc96a","Type":"ContainerDied","Data":"f39d91732fbc3e932a86b606581cdf01cb9c30d6f97ee8558a89acfe632c15e7"}
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.599935 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e00ba39-1426-43ee-bedc-b865bb3cc96a","Type":"ContainerDied","Data":"c1681aac39cb16b22710d796c6212d9fa2c70888bd9fbdc1afb3847136a837f6"}
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.599945 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1681aac39cb16b22710d796c6212d9fa2c70888bd9fbdc1afb3847136a837f6"
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.601401 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cab04439-797a-489b-a4a7-d7cd3c23ccec","Type":"ContainerStarted","Data":"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a"}
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.607509 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.667751 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.122439716 podStartE2EDuration="9.66773185s" podCreationTimestamp="2026-01-03 06:05:17 +0000 UTC" firstStartedPulling="2026-01-03 06:05:20.264462624 +0000 UTC m=+1498.591039196" lastFinishedPulling="2026-01-03 06:05:23.809754758 +0000 UTC m=+1502.136331330" observedRunningTime="2026-01-03 06:05:26.650990663 +0000 UTC m=+1504.977567245" watchObservedRunningTime="2026-01-03 06:05:26.66773185 +0000 UTC m=+1504.994308422"
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.687772 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e00ba39-1426-43ee-bedc-b865bb3cc96a-etc-machine-id\") pod \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") "
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.687864 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9j97\" (UniqueName: \"kubernetes.io/projected/8e00ba39-1426-43ee-bedc-b865bb3cc96a-kube-api-access-q9j97\") pod \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") "
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.687921 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e00ba39-1426-43ee-bedc-b865bb3cc96a-logs\") pod \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") "
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.687996 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data-custom\") pod \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") "
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.688059 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-combined-ca-bundle\") pod \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") "
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.688097 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-scripts\") pod \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") "
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.688260 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data\") pod \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\" (UID: \"8e00ba39-1426-43ee-bedc-b865bb3cc96a\") "
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.689527 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e00ba39-1426-43ee-bedc-b865bb3cc96a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8e00ba39-1426-43ee-bedc-b865bb3cc96a" (UID: "8e00ba39-1426-43ee-bedc-b865bb3cc96a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.690506 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e00ba39-1426-43ee-bedc-b865bb3cc96a-logs" (OuterVolumeSpecName: "logs") pod "8e00ba39-1426-43ee-bedc-b865bb3cc96a" (UID: "8e00ba39-1426-43ee-bedc-b865bb3cc96a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.705372 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-scripts" (OuterVolumeSpecName: "scripts") pod "8e00ba39-1426-43ee-bedc-b865bb3cc96a" (UID: "8e00ba39-1426-43ee-bedc-b865bb3cc96a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.712304 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e00ba39-1426-43ee-bedc-b865bb3cc96a" (UID: "8e00ba39-1426-43ee-bedc-b865bb3cc96a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.727242 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e00ba39-1426-43ee-bedc-b865bb3cc96a-kube-api-access-q9j97" (OuterVolumeSpecName: "kube-api-access-q9j97") pod "8e00ba39-1426-43ee-bedc-b865bb3cc96a" (UID: "8e00ba39-1426-43ee-bedc-b865bb3cc96a"). InnerVolumeSpecName "kube-api-access-q9j97". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.796613 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.796888 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.796898 4854 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e00ba39-1426-43ee-bedc-b865bb3cc96a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.796906 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9j97\" (UniqueName: \"kubernetes.io/projected/8e00ba39-1426-43ee-bedc-b865bb3cc96a-kube-api-access-q9j97\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.796921 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e00ba39-1426-43ee-bedc-b865bb3cc96a-logs\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.857378 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data" (OuterVolumeSpecName: "config-data") pod "8e00ba39-1426-43ee-bedc-b865bb3cc96a" (UID: "8e00ba39-1426-43ee-bedc-b865bb3cc96a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.859105 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e00ba39-1426-43ee-bedc-b865bb3cc96a" (UID: "8e00ba39-1426-43ee-bedc-b865bb3cc96a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.899264 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:26 crc kubenswrapper[4854]: I0103 06:05:26.899295 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00ba39-1426-43ee-bedc-b865bb3cc96a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.616373 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.625881 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerStarted","Data":"507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000"}
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.659215 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.669249 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.694090 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 03 06:05:27 crc kubenswrapper[4854]: E0103 06:05:27.694557 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerName="cinder-api-log"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.694575 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerName="cinder-api-log"
Jan 03 06:05:27 crc kubenswrapper[4854]: E0103 06:05:27.694612 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerName="cinder-api"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.694620 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerName="cinder-api"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.694809 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerName="cinder-api"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.694836 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" containerName="cinder-api-log"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.695938 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.698848 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.698999 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.699282 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.705794 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55c4d98986-689lr"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.712189 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.819776 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4krr\" (UniqueName: \"kubernetes.io/projected/d8d21d2a-7f73-4026-87ab-632c4a623577-kube-api-access-n4krr\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.819819 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-config-data-custom\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.819889 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-config-data\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.820162 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.820231 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.820418 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.820527 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-scripts\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.820627 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8d21d2a-7f73-4026-87ab-632c4a623577-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.820763 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d21d2a-7f73-4026-87ab-632c4a623577-logs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922605 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-config-data\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922674 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922699 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922756 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922795 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-scripts\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922827 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8d21d2a-7f73-4026-87ab-632c4a623577-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922868 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d21d2a-7f73-4026-87ab-632c4a623577-logs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922900 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4krr\" (UniqueName: \"kubernetes.io/projected/d8d21d2a-7f73-4026-87ab-632c4a623577-kube-api-access-n4krr\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922922 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-config-data-custom\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.922970 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8d21d2a-7f73-4026-87ab-632c4a623577-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.923380 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d21d2a-7f73-4026-87ab-632c4a623577-logs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.929672 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.930882 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.930900 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-config-data-custom\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.931674 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-config-data\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.932559 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-scripts\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.939783 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d21d2a-7f73-4026-87ab-632c4a623577-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.941159 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4krr\" (UniqueName: \"kubernetes.io/projected/d8d21d2a-7f73-4026-87ab-632c4a623577-kube-api-access-n4krr\") pod \"cinder-api-0\" (UID: \"d8d21d2a-7f73-4026-87ab-632c4a623577\") " pod="openstack/cinder-api-0"
Jan 03 06:05:27 crc kubenswrapper[4854]: I0103 06:05:27.969968 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.016120 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.140935 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e00ba39-1426-43ee-bedc-b865bb3cc96a" path="/var/lib/kubelet/pods/8e00ba39-1426-43ee-bedc-b865bb3cc96a/volumes" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.537139 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.538956 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.545598 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jzdfs" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.545862 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.546042 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.567284 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.617390 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.641307 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8d21d2a-7f73-4026-87ab-632c4a623577","Type":"ContainerStarted","Data":"860fc357c5164d0987f2a9c92e9c0cea3609c0df1669f31f83d7c33da33a002a"} Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.679405 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d443f98-7ca4-4ea2-bf9b-c64182525733-openstack-config\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.679477 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d443f98-7ca4-4ea2-bf9b-c64182525733-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.679549 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvz89\" (UniqueName: \"kubernetes.io/projected/5d443f98-7ca4-4ea2-bf9b-c64182525733-kube-api-access-kvz89\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.679674 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d443f98-7ca4-4ea2-bf9b-c64182525733-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc 
kubenswrapper[4854]: I0103 06:05:28.711397 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.781889 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d443f98-7ca4-4ea2-bf9b-c64182525733-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.782086 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d443f98-7ca4-4ea2-bf9b-c64182525733-openstack-config\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.782251 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d443f98-7ca4-4ea2-bf9b-c64182525733-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.782368 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvz89\" (UniqueName: \"kubernetes.io/projected/5d443f98-7ca4-4ea2-bf9b-c64182525733-kube-api-access-kvz89\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.792943 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d443f98-7ca4-4ea2-bf9b-c64182525733-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.797021 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d443f98-7ca4-4ea2-bf9b-c64182525733-openstack-config\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.800761 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvz89\" (UniqueName: \"kubernetes.io/projected/5d443f98-7ca4-4ea2-bf9b-c64182525733-kube-api-access-kvz89\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.801105 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d443f98-7ca4-4ea2-bf9b-c64182525733-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5d443f98-7ca4-4ea2-bf9b-c64182525733\") " pod="openstack/openstackclient" Jan 03 06:05:28 crc kubenswrapper[4854]: I0103 06:05:28.906790 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 03 06:05:29 crc kubenswrapper[4854]: I0103 06:05:29.567963 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 03 06:05:29 crc kubenswrapper[4854]: I0103 06:05:29.710181 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8d21d2a-7f73-4026-87ab-632c4a623577","Type":"ContainerStarted","Data":"5a7ca0f78f3747c7337f0170bef44a486004bf722edb7ba551dea91e43fa36db"} Jan 03 06:05:29 crc kubenswrapper[4854]: I0103 06:05:29.724674 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerStarted","Data":"c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7"} Jan 03 06:05:29 crc kubenswrapper[4854]: I0103 06:05:29.725409 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 06:05:29 crc kubenswrapper[4854]: I0103 06:05:29.745286 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5d443f98-7ca4-4ea2-bf9b-c64182525733","Type":"ContainerStarted","Data":"b93533dc22966563006c354f2657e67bddd4216f876e28127d6f3af8ad6135a5"} Jan 03 06:05:29 crc kubenswrapper[4854]: I0103 06:05:29.759702 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.268421805 podStartE2EDuration="7.759677994s" podCreationTimestamp="2026-01-03 06:05:22 +0000 UTC" firstStartedPulling="2026-01-03 06:05:23.856434552 +0000 UTC m=+1502.183011124" lastFinishedPulling="2026-01-03 06:05:28.347690751 +0000 UTC m=+1506.674267313" observedRunningTime="2026-01-03 06:05:29.743306626 +0000 UTC m=+1508.069883218" watchObservedRunningTime="2026-01-03 06:05:29.759677994 +0000 UTC m=+1508.086254586" Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.368537 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f9d458c9d-vtsmw" Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.460719 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85bb9cd67c-w6ss9"] Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.616039 4854 util.go:30] "No sandbox for pod can be found. 
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.629750 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.629960 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.630148 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.670193 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85bb9cd67c-w6ss9"]
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.735518 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-log-httpd\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.735651 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-internal-tls-certs\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.735744 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-config-data\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.736262 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-combined-ca-bundle\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.736304 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-etc-swift\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.736339 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-public-tls-certs\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.736385 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzcwl\" (UniqueName: \"kubernetes.io/projected/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-kube-api-access-kzcwl\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.736473 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-run-httpd\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.841517 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-combined-ca-bundle\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.841566 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-etc-swift\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.841588 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-public-tls-certs\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.841621 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzcwl\" (UniqueName: \"kubernetes.io/projected/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-kube-api-access-kzcwl\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.841659 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-run-httpd\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.841727 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-log-httpd\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.841743 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-internal-tls-certs\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.841773 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-config-data\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.842888 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-log-httpd\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.842926 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-run-httpd\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.858143 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-combined-ca-bundle\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.863240 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-etc-swift\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.876185 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-public-tls-certs\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.878621 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-internal-tls-certs\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.881259 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-config-data\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.898683 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzcwl\" (UniqueName: \"kubernetes.io/projected/b3991ad0-4c9f-466c-a5b2-a801fad29c1e-kube-api-access-kzcwl\") pod \"swift-proxy-85bb9cd67c-w6ss9\" (UID: \"b3991ad0-4c9f-466c-a5b2-a801fad29c1e\") " pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:30 crc kubenswrapper[4854]: I0103 06:05:30.946216 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:31 crc kubenswrapper[4854]: I0103 06:05:31.780661 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8d21d2a-7f73-4026-87ab-632c4a623577","Type":"ContainerStarted","Data":"492dce1c383781bd42c3ae9ed49b79b753039db2b6a7b031de62a0f70a484d80"}
Jan 03 06:05:31 crc kubenswrapper[4854]: I0103 06:05:31.783753 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 03 06:05:31 crc kubenswrapper[4854]: I0103 06:05:31.807664 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.807643562 podStartE2EDuration="4.807643562s" podCreationTimestamp="2026-01-03 06:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:31.800155441 +0000 UTC m=+1510.126732023" watchObservedRunningTime="2026-01-03 06:05:31.807643562 +0000 UTC m=+1510.134220134"
Jan 03 06:05:31 crc kubenswrapper[4854]: I0103 06:05:31.875420 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85bb9cd67c-w6ss9"]
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.349550 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f9d458c9d-vtsmw"
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.440245 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55c4d98986-689lr"]
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.440932 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55c4d98986-689lr" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api-log" containerID="cri-o://7613dcbe587227c5c7f61de9f7ee2d2a1e76ab284f23afa6fe78b635af035ceb" gracePeriod=30
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.441641 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55c4d98986-689lr" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api" containerID="cri-o://077594c8c17b300edaa8fa8bc9934b5db9958d98469ec182dda06e83f37e2da3" gracePeriod=30
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.876048 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bb9cd67c-w6ss9" event={"ID":"b3991ad0-4c9f-466c-a5b2-a801fad29c1e","Type":"ContainerStarted","Data":"142432c264138f3e9180e5cb66ca748159e97c2e54c2b66b63fdb7335b568348"}
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.876335 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bb9cd67c-w6ss9" event={"ID":"b3991ad0-4c9f-466c-a5b2-a801fad29c1e","Type":"ContainerStarted","Data":"49edbc3d831041900a6961fa0b9f2a9d520daaf212c8191945b94d236cc2243b"}
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.876377 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.876397 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85bb9cd67c-w6ss9"
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.881162 4854 generic.go:334] "Generic (PLEG): container finished" podID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerID="7613dcbe587227c5c7f61de9f7ee2d2a1e76ab284f23afa6fe78b635af035ceb" exitCode=143
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.881928 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c4d98986-689lr" event={"ID":"2481a421-76c9-4baa-bde8-c93eebdc4403","Type":"ContainerDied","Data":"7613dcbe587227c5c7f61de9f7ee2d2a1e76ab284f23afa6fe78b635af035ceb"}
Jan 03 06:05:32 crc kubenswrapper[4854]: I0103 06:05:32.915711 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85bb9cd67c-w6ss9" podStartSLOduration=2.915692774 podStartE2EDuration="2.915692774s" podCreationTimestamp="2026-01-03 06:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:32.906011479 +0000 UTC m=+1511.232588051" watchObservedRunningTime="2026-01-03 06:05:32.915692774 +0000 UTC m=+1511.242269346"
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.085814 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr"
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.160304 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-f55rh"]
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.160528 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fb745b69-f55rh" podUID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerName="dnsmasq-dns" containerID="cri-o://dfac7049f9c2a5f4cf7fc6dc6eb91048dc1eecae9557ce503df15c179f15025d" gracePeriod=10
Jan 03 06:05:33 crc kubenswrapper[4854]: E0103 06:05:33.312735 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cac862a_2a43_44ed_903a_8d7b09100ac3.slice/crio-conmon-dfac7049f9c2a5f4cf7fc6dc6eb91048dc1eecae9557ce503df15c179f15025d.scope\": RecentStats: unable to find data in memory cache]"
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.320363 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.373594 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.374410 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="ceilometer-central-agent" containerID="cri-o://7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869" gracePeriod=30
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.374599 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="proxy-httpd" containerID="cri-o://c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7" gracePeriod=30
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.374668 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="sg-core" containerID="cri-o://507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000" gracePeriod=30
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.374729 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="ceilometer-notification-agent" containerID="cri-o://f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b" gracePeriod=30
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.449929 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.921352 4854 generic.go:334] "Generic (PLEG): container finished" podID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerID="dfac7049f9c2a5f4cf7fc6dc6eb91048dc1eecae9557ce503df15c179f15025d" exitCode=0
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.921423 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-f55rh" event={"ID":"0cac862a-2a43-44ed-903a-8d7b09100ac3","Type":"ContainerDied","Data":"dfac7049f9c2a5f4cf7fc6dc6eb91048dc1eecae9557ce503df15c179f15025d"}
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.923533 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bb9cd67c-w6ss9" event={"ID":"b3991ad0-4c9f-466c-a5b2-a801fad29c1e","Type":"ContainerStarted","Data":"831da6ac1b89b5210ae5eac4db4b15ec941f78c045e5dbd28a621464122ebe58"}
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.942033 4854 generic.go:334] "Generic (PLEG): container finished" podID="a52c5887-6e07-4241-b543-55f19941dde9" containerID="c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7" exitCode=0
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.942513 4854 generic.go:334] "Generic (PLEG): container finished" podID="a52c5887-6e07-4241-b543-55f19941dde9" containerID="507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000" exitCode=2
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.942524 4854 generic.go:334] "Generic (PLEG): container finished" podID="a52c5887-6e07-4241-b543-55f19941dde9" containerID="f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b" exitCode=0
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.942731 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerName="cinder-scheduler" containerID="cri-o://8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a" gracePeriod=30
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.942988 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerDied","Data":"c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7"}
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.943019 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerDied","Data":"507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000"}
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.943028 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerDied","Data":"f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b"}
Jan 03 06:05:33 crc kubenswrapper[4854]: I0103 06:05:33.943451 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerName="probe" containerID="cri-o://cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a" gracePeriod=30
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.108441 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-f55rh"
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.274958 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-sb\") pod \"0cac862a-2a43-44ed-903a-8d7b09100ac3\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") "
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.275037 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-nb\") pod \"0cac862a-2a43-44ed-903a-8d7b09100ac3\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") "
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.275061 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-config\") pod \"0cac862a-2a43-44ed-903a-8d7b09100ac3\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") "
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.275161 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-dns-svc\") pod \"0cac862a-2a43-44ed-903a-8d7b09100ac3\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") "
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.275232 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mnkg\" (UniqueName: \"kubernetes.io/projected/0cac862a-2a43-44ed-903a-8d7b09100ac3-kube-api-access-9mnkg\") pod \"0cac862a-2a43-44ed-903a-8d7b09100ac3\" (UID: \"0cac862a-2a43-44ed-903a-8d7b09100ac3\") "
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.285029 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cac862a-2a43-44ed-903a-8d7b09100ac3-kube-api-access-9mnkg" (OuterVolumeSpecName: "kube-api-access-9mnkg") pod "0cac862a-2a43-44ed-903a-8d7b09100ac3" (UID: "0cac862a-2a43-44ed-903a-8d7b09100ac3"). InnerVolumeSpecName "kube-api-access-9mnkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.379282 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mnkg\" (UniqueName: \"kubernetes.io/projected/0cac862a-2a43-44ed-903a-8d7b09100ac3-kube-api-access-9mnkg\") on node \"crc\" DevicePath \"\""
Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.426419 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0cac862a-2a43-44ed-903a-8d7b09100ac3" (UID: "0cac862a-2a43-44ed-903a-8d7b09100ac3"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.439019 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0cac862a-2a43-44ed-903a-8d7b09100ac3" (UID: "0cac862a-2a43-44ed-903a-8d7b09100ac3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.463987 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-config" (OuterVolumeSpecName: "config") pod "0cac862a-2a43-44ed-903a-8d7b09100ac3" (UID: "0cac862a-2a43-44ed-903a-8d7b09100ac3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.471249 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0cac862a-2a43-44ed-903a-8d7b09100ac3" (UID: "0cac862a-2a43-44ed-903a-8d7b09100ac3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.495033 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.495080 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.495108 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.495120 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cac862a-2a43-44ed-903a-8d7b09100ac3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.745654 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.804884 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpbfj\" (UniqueName: \"kubernetes.io/projected/a52c5887-6e07-4241-b543-55f19941dde9-kube-api-access-dpbfj\") pod \"a52c5887-6e07-4241-b543-55f19941dde9\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.804967 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-sg-core-conf-yaml\") pod \"a52c5887-6e07-4241-b543-55f19941dde9\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.805562 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-run-httpd\") pod \"a52c5887-6e07-4241-b543-55f19941dde9\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.805690 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-config-data\") pod \"a52c5887-6e07-4241-b543-55f19941dde9\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.805728 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-scripts\") pod \"a52c5887-6e07-4241-b543-55f19941dde9\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.805751 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-log-httpd\") pod \"a52c5887-6e07-4241-b543-55f19941dde9\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.805777 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-combined-ca-bundle\") pod \"a52c5887-6e07-4241-b543-55f19941dde9\" (UID: \"a52c5887-6e07-4241-b543-55f19941dde9\") " Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.815781 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a52c5887-6e07-4241-b543-55f19941dde9" (UID: "a52c5887-6e07-4241-b543-55f19941dde9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.815878 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a52c5887-6e07-4241-b543-55f19941dde9" (UID: "a52c5887-6e07-4241-b543-55f19941dde9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.822611 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-scripts" (OuterVolumeSpecName: "scripts") pod "a52c5887-6e07-4241-b543-55f19941dde9" (UID: "a52c5887-6e07-4241-b543-55f19941dde9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.832364 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52c5887-6e07-4241-b543-55f19941dde9-kube-api-access-dpbfj" (OuterVolumeSpecName: "kube-api-access-dpbfj") pod "a52c5887-6e07-4241-b543-55f19941dde9" (UID: "a52c5887-6e07-4241-b543-55f19941dde9"). InnerVolumeSpecName "kube-api-access-dpbfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.895073 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a52c5887-6e07-4241-b543-55f19941dde9" (UID: "a52c5887-6e07-4241-b543-55f19941dde9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.909637 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.909677 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.909687 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpbfj\" (UniqueName: \"kubernetes.io/projected/a52c5887-6e07-4241-b543-55f19941dde9-kube-api-access-dpbfj\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.909696 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.909706 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a52c5887-6e07-4241-b543-55f19941dde9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:34 crc kubenswrapper[4854]: I0103 06:05:34.961342 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a52c5887-6e07-4241-b543-55f19941dde9" (UID: "a52c5887-6e07-4241-b543-55f19941dde9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.002031 4854 generic.go:334] "Generic (PLEG): container finished" podID="a52c5887-6e07-4241-b543-55f19941dde9" containerID="7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869" exitCode=0 Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.002106 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerDied","Data":"7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869"} Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.002134 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a52c5887-6e07-4241-b543-55f19941dde9","Type":"ContainerDied","Data":"5a7cb76d376453f46fb9280fdc45b04579d9d5455a8661be9be7b88c88c488fa"} Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.002150 4854 scope.go:117] "RemoveContainer" containerID="c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.002302 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.013033 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.042175 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-f55rh" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.047202 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-f55rh" event={"ID":"0cac862a-2a43-44ed-903a-8d7b09100ac3","Type":"ContainerDied","Data":"f622e97f672d12adc68ca13ff79121c991857a3cc9c602030c590bfd55bd0b6c"} Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.107245 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-config-data" (OuterVolumeSpecName: "config-data") pod "a52c5887-6e07-4241-b543-55f19941dde9" (UID: "a52c5887-6e07-4241-b543-55f19941dde9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.120613 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52c5887-6e07-4241-b543-55f19941dde9-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.209783 4854 scope.go:117] "RemoveContainer" containerID="507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.225547 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-f55rh"] Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.242981 4854 scope.go:117] "RemoveContainer" containerID="f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.244414 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-f55rh"] Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.291049 4854 scope.go:117] "RemoveContainer" containerID="7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.319873 4854 scope.go:117] "RemoveContainer" containerID="c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.320432 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7\": container with ID starting with c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7 not found: ID does not exist" containerID="c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.320477 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7"} err="failed to get container status \"c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7\": rpc error: code = NotFound desc = could not find container \"c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7\": container with ID starting with c861b854ae5d787e768ed083a63320f19df8118c66fa42a4fcccb16892ca21c7 not found: ID does not exist" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.320507 4854 scope.go:117] "RemoveContainer" containerID="507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.320801 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000\": container with ID starting with 507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000 not found: ID does not exist" containerID="507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.320831 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000"} err="failed to get container status \"507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000\": rpc error: code = NotFound desc = could not find container \"507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000\": container with ID starting with 
507f29401ac91b50aabd91f6ab4b69c01f2f93312b8ff8318e4039a50c06c000 not found: ID does not exist" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.320848 4854 scope.go:117] "RemoveContainer" containerID="f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.321305 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b\": container with ID starting with f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b not found: ID does not exist" containerID="f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.321342 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b"} err="failed to get container status \"f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b\": rpc error: code = NotFound desc = could not find container \"f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b\": container with ID starting with f7229a733728ec33b78d351688cbd31f4086095ba01b693e0a036584a390ef2b not found: ID does not exist" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.321362 4854 scope.go:117] "RemoveContainer" containerID="7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.321607 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869\": container with ID starting with 7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869 not found: ID does not exist" containerID="7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.321704 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869"} err="failed to get container status \"7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869\": rpc error: code = NotFound desc = could not find container \"7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869\": container with ID starting with 7199f6b6395edb419bf9df1d3076dde926984c802dbee49cfb66c4c755d2f869 not found: ID does not exist" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.321724 4854 scope.go:117] "RemoveContainer" containerID="dfac7049f9c2a5f4cf7fc6dc6eb91048dc1eecae9557ce503df15c179f15025d" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.354214 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.377904 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.416142 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.416681 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="proxy-httpd" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.416698 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52c5887-6e07-4241-b543-55f19941dde9" 
containerName="proxy-httpd" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.416717 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerName="init" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.416724 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerName="init" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.416733 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="ceilometer-notification-agent" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.416740 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="ceilometer-notification-agent" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.416747 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="sg-core" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.416752 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="sg-core" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.416771 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="ceilometer-central-agent" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.416779 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="ceilometer-central-agent" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.416798 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerName="dnsmasq-dns" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.416804 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerName="dnsmasq-dns" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.417011 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="proxy-httpd" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.417029 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerName="dnsmasq-dns" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.417038 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="sg-core" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.417050 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="ceilometer-notification-agent" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.417058 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52c5887-6e07-4241-b543-55f19941dde9" containerName="ceilometer-central-agent" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.424444 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.428227 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.428382 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.436282 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.460495 4854 scope.go:117] "RemoveContainer" containerID="2918588f419fa62b8b58b68f06748dad12cf158d58d11e2725a85e9e2319dcb5" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.531385 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-log-httpd\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.531437 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-scripts\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.531498 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-config-data\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.531526 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljp95\" (UniqueName: \"kubernetes.io/projected/7b3ecf60-8687-4af2-a477-f2f058b854ea-kube-api-access-ljp95\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.531574 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-run-httpd\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.531616 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.531688 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.633826 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-config-data\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.633878 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljp95\" (UniqueName: \"kubernetes.io/projected/7b3ecf60-8687-4af2-a477-f2f058b854ea-kube-api-access-ljp95\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.633937 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-run-httpd\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.633990 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.634022 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.634162 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-log-httpd\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.634189 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-scripts\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.635830 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-log-httpd\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.635916 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-run-httpd\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.639166 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.640118 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-config-data\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.640487 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.641483 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-scripts\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.657029 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljp95\" (UniqueName: \"kubernetes.io/projected/7b3ecf60-8687-4af2-a477-f2f058b854ea-kube-api-access-ljp95\") pod \"ceilometer-0\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.720263 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55c4d98986-689lr" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:43970->10.217.0.201:9311: read: connection reset by peer" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.720268 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55c4d98986-689lr" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:43960->10.217.0.201:9311: read: connection reset by peer" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.763686 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.846702 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.928849 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-79cf8b54b6-vks4f"] Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.929439 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerName="probe" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.929452 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerName="probe" Jan 03 06:05:35 crc kubenswrapper[4854]: E0103 06:05:35.929658 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerName="cinder-scheduler" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.929664 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerName="cinder-scheduler" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.929862 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerName="probe" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.929880 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerName="cinder-scheduler" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.930747 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.935880 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.936316 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 03 06:05:35 crc kubenswrapper[4854]: I0103 06:05:35.938400 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-pvtl7" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.014224 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79cf8b54b6-vks4f"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.040907 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-combined-ca-bundle\") pod \"cab04439-797a-489b-a4a7-d7cd3c23ccec\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041020 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-scripts\") pod \"cab04439-797a-489b-a4a7-d7cd3c23ccec\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041108 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data\") pod \"cab04439-797a-489b-a4a7-d7cd3c23ccec\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041214 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdlzs\" (UniqueName: \"kubernetes.io/projected/cab04439-797a-489b-a4a7-d7cd3c23ccec-kube-api-access-sdlzs\") 
pod \"cab04439-797a-489b-a4a7-d7cd3c23ccec\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041285 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cab04439-797a-489b-a4a7-d7cd3c23ccec-etc-machine-id\") pod \"cab04439-797a-489b-a4a7-d7cd3c23ccec\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041407 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data-custom\") pod \"cab04439-797a-489b-a4a7-d7cd3c23ccec\" (UID: \"cab04439-797a-489b-a4a7-d7cd3c23ccec\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041698 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041781 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwc4d\" (UniqueName: \"kubernetes.io/projected/57dd35dc-074c-4a29-92f6-afebc0f9fad3-kube-api-access-cwc4d\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041847 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data-custom\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.041919 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-combined-ca-bundle\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.046674 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cab04439-797a-489b-a4a7-d7cd3c23ccec-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cab04439-797a-489b-a4a7-d7cd3c23ccec" (UID: "cab04439-797a-489b-a4a7-d7cd3c23ccec"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.057475 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab04439-797a-489b-a4a7-d7cd3c23ccec-kube-api-access-sdlzs" (OuterVolumeSpecName: "kube-api-access-sdlzs") pod "cab04439-797a-489b-a4a7-d7cd3c23ccec" (UID: "cab04439-797a-489b-a4a7-d7cd3c23ccec"). InnerVolumeSpecName "kube-api-access-sdlzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.062225 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-scripts" (OuterVolumeSpecName: "scripts") pod "cab04439-797a-489b-a4a7-d7cd3c23ccec" (UID: "cab04439-797a-489b-a4a7-d7cd3c23ccec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.076911 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cab04439-797a-489b-a4a7-d7cd3c23ccec" (UID: "cab04439-797a-489b-a4a7-d7cd3c23ccec"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.090269 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-b57b6"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.092607 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.110364 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-86b9b59fd6-bgwdc"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.172050 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.181266 4854 generic.go:334] "Generic (PLEG): container finished" podID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerID="077594c8c17b300edaa8fa8bc9934b5db9958d98469ec182dda06e83f37e2da3" exitCode=0 Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.182751 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwc4d\" (UniqueName: \"kubernetes.io/projected/57dd35dc-074c-4a29-92f6-afebc0f9fad3-kube-api-access-cwc4d\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.182957 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data-custom\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.183192 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-combined-ca-bundle\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.183540 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.188895 4854 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"heat-cfnapi-config-data" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.205853 4854 generic.go:334] "Generic (PLEG): container finished" podID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerID="cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a" exitCode=0 Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.205883 4854 generic.go:334] "Generic (PLEG): container finished" podID="cab04439-797a-489b-a4a7-d7cd3c23ccec" containerID="8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a" exitCode=0 Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.205987 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.222881 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdlzs\" (UniqueName: \"kubernetes.io/projected/cab04439-797a-489b-a4a7-d7cd3c23ccec-kube-api-access-sdlzs\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.236002 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cab04439-797a-489b-a4a7-d7cd3c23ccec" (UID: "cab04439-797a-489b-a4a7-d7cd3c23ccec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.243735 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cac862a-2a43-44ed-903a-8d7b09100ac3" path="/var/lib/kubelet/pods/0cac862a-2a43-44ed-903a-8d7b09100ac3/volumes" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.251643 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-combined-ca-bundle\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.250280 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.252720 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data-custom\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.244129 4854 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cab04439-797a-489b-a4a7-d7cd3c23ccec-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.252873 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.252884 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.253223 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52c5887-6e07-4241-b543-55f19941dde9" path="/var/lib/kubelet/pods/a52c5887-6e07-4241-b543-55f19941dde9/volumes" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.254520 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwc4d\" (UniqueName: \"kubernetes.io/projected/57dd35dc-074c-4a29-92f6-afebc0f9fad3-kube-api-access-cwc4d\") pod \"heat-engine-79cf8b54b6-vks4f\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.281115 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data" (OuterVolumeSpecName: "config-data") pod "cab04439-797a-489b-a4a7-d7cd3c23ccec" (UID: "cab04439-797a-489b-a4a7-d7cd3c23ccec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.294786 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-b57b6"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.294836 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-86b9b59fd6-bgwdc"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.294906 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5486c4bcf9-6vs2g"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.305983 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c4d98986-689lr" event={"ID":"2481a421-76c9-4baa-bde8-c93eebdc4403","Type":"ContainerDied","Data":"077594c8c17b300edaa8fa8bc9934b5db9958d98469ec182dda06e83f37e2da3"} Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.306858 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5486c4bcf9-6vs2g"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.306946 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cab04439-797a-489b-a4a7-d7cd3c23ccec","Type":"ContainerDied","Data":"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a"} Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.307015 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cab04439-797a-489b-a4a7-d7cd3c23ccec","Type":"ContainerDied","Data":"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a"} Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.307092 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cab04439-797a-489b-a4a7-d7cd3c23ccec","Type":"ContainerDied","Data":"9c0c6a7a4c84ea88a2306a2989e750167ae033953609621d1822cb80aa70e3c4"} Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.307149 4854 scope.go:117] "RemoveContainer" containerID="cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.307208 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.311524 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.353951 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.354737 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.354851 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-combined-ca-bundle\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.354939 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.355020 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.355167 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmg9\" (UniqueName: \"kubernetes.io/projected/9644f7d8-d9bb-436b-a629-81cbc06323be-kube-api-access-qlmg9\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.355330 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data-custom\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.355513 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data-custom\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 
06:05:36.355711 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-config\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.355983 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.356119 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.356290 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnc9\" (UniqueName: \"kubernetes.io/projected/793c54a0-e893-4837-9039-3ff439b66296-kube-api-access-fhnc9\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.357744 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-combined-ca-bundle\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.357810 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dfbf\" (UniqueName: \"kubernetes.io/projected/5cec124c-cb6f-4b93-a398-6c766bbc6c19-kube-api-access-9dfbf\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.358001 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.358026 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab04439-797a-489b-a4a7-d7cd3c23ccec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.366058 4854 scope.go:117] "RemoveContainer" containerID="8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.432884 4854 scope.go:117] "RemoveContainer" containerID="cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a" Jan 03 06:05:36 crc kubenswrapper[4854]: E0103 06:05:36.433386 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a\": 
container with ID starting with cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a not found: ID does not exist" containerID="cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.433425 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a"} err="failed to get container status \"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a\": rpc error: code = NotFound desc = could not find container \"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a\": container with ID starting with cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a not found: ID does not exist" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.433453 4854 scope.go:117] "RemoveContainer" containerID="8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a" Jan 03 06:05:36 crc kubenswrapper[4854]: E0103 06:05:36.433921 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a\": container with ID starting with 8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a not found: ID does not exist" containerID="8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.433947 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a"} err="failed to get container status \"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a\": rpc error: code = NotFound desc = could not find container \"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a\": container with ID starting with 8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a not found: ID does not exist" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.433965 4854 scope.go:117] "RemoveContainer" containerID="cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.440803 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a"} err="failed to get container status \"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a\": rpc error: code = NotFound desc = could not find container \"cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a\": container with ID starting with cebbbfc964da1c9663fa0ce2ecb5dec47511ccc057528f8a595230d975c97e5a not found: ID does not exist" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.440862 4854 scope.go:117] "RemoveContainer" containerID="8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.442607 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a"} err="failed to get container status \"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a\": rpc error: code = NotFound desc = could not find container \"8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a\": container with ID starting with 8077bc9993ac2bb6a7b10086e56be2c094ec7f21cd49b833a329e9a19b88a14a not 
found: ID does not exist" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.473535 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data-custom\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.484745 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data-custom\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.484908 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data-custom\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.484583 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.485104 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-config\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.485286 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.485390 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.485505 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhnc9\" (UniqueName: \"kubernetes.io/projected/793c54a0-e893-4837-9039-3ff439b66296-kube-api-access-fhnc9\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.485687 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-combined-ca-bundle\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.485787 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dfbf\" (UniqueName: 
\"kubernetes.io/projected/5cec124c-cb6f-4b93-a398-6c766bbc6c19-kube-api-access-9dfbf\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.485926 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.486048 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.486180 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-combined-ca-bundle\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.486300 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.486406 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.489801 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlmg9\" (UniqueName: \"kubernetes.io/projected/9644f7d8-d9bb-436b-a629-81cbc06323be-kube-api-access-qlmg9\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.491362 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.492978 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data-custom\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.486351 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-svc\") 
pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.493996 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.494810 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-config\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.494637 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.502921 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-combined-ca-bundle\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.507005 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlmg9\" (UniqueName: \"kubernetes.io/projected/9644f7d8-d9bb-436b-a629-81cbc06323be-kube-api-access-qlmg9\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.509467 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-combined-ca-bundle\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.511767 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data\") pod \"heat-api-5486c4bcf9-6vs2g\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.516469 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dfbf\" (UniqueName: \"kubernetes.io/projected/5cec124c-cb6f-4b93-a398-6c766bbc6c19-kube-api-access-9dfbf\") pod \"dnsmasq-dns-7756b9d78c-b57b6\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.516645 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhnc9\" (UniqueName: \"kubernetes.io/projected/793c54a0-e893-4837-9039-3ff439b66296-kube-api-access-fhnc9\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " 
pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.516678 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data\") pod \"heat-cfnapi-86b9b59fd6-bgwdc\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.580627 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.616439 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.617987 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.644421 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.654435 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.670043 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.696942 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-combined-ca-bundle\") pod \"2481a421-76c9-4baa-bde8-c93eebdc4403\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.697000 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data-custom\") pod \"2481a421-76c9-4baa-bde8-c93eebdc4403\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.697220 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb4zd\" (UniqueName: \"kubernetes.io/projected/2481a421-76c9-4baa-bde8-c93eebdc4403-kube-api-access-mb4zd\") pod \"2481a421-76c9-4baa-bde8-c93eebdc4403\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.697348 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data\") pod \"2481a421-76c9-4baa-bde8-c93eebdc4403\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.697440 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2481a421-76c9-4baa-bde8-c93eebdc4403-logs\") pod \"2481a421-76c9-4baa-bde8-c93eebdc4403\" (UID: \"2481a421-76c9-4baa-bde8-c93eebdc4403\") " Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.699456 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2481a421-76c9-4baa-bde8-c93eebdc4403-logs" (OuterVolumeSpecName: "logs") pod "2481a421-76c9-4baa-bde8-c93eebdc4403" (UID: 
"2481a421-76c9-4baa-bde8-c93eebdc4403"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.710623 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 06:05:36 crc kubenswrapper[4854]: E0103 06:05:36.711240 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api-log" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.711257 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api-log" Jan 03 06:05:36 crc kubenswrapper[4854]: E0103 06:05:36.711285 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.711348 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.711591 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.711612 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" containerName="barbican-api-log" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.721713 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.724453 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.724749 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2481a421-76c9-4baa-bde8-c93eebdc4403" (UID: "2481a421-76c9-4baa-bde8-c93eebdc4403"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.736383 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2481a421-76c9-4baa-bde8-c93eebdc4403-kube-api-access-mb4zd" (OuterVolumeSpecName: "kube-api-access-mb4zd") pod "2481a421-76c9-4baa-bde8-c93eebdc4403" (UID: "2481a421-76c9-4baa-bde8-c93eebdc4403"). InnerVolumeSpecName "kube-api-access-mb4zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.741677 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.756178 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.771252 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2481a421-76c9-4baa-bde8-c93eebdc4403" (UID: "2481a421-76c9-4baa-bde8-c93eebdc4403"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.802911 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-scripts\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.802951 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.802980 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-config-data\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.803147 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d802db5-d336-4639-8264-e628fa15d820-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.803195 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzv9\" (UniqueName: \"kubernetes.io/projected/2d802db5-d336-4639-8264-e628fa15d820-kube-api-access-zvzv9\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.803219 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.803291 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb4zd\" (UniqueName: \"kubernetes.io/projected/2481a421-76c9-4baa-bde8-c93eebdc4403-kube-api-access-mb4zd\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.803302 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2481a421-76c9-4baa-bde8-c93eebdc4403-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.803312 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.803321 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.855190 4854 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data" (OuterVolumeSpecName: "config-data") pod "2481a421-76c9-4baa-bde8-c93eebdc4403" (UID: "2481a421-76c9-4baa-bde8-c93eebdc4403"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.906800 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d802db5-d336-4639-8264-e628fa15d820-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.906871 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzv9\" (UniqueName: \"kubernetes.io/projected/2d802db5-d336-4639-8264-e628fa15d820-kube-api-access-zvzv9\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.906902 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.906941 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-scripts\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.906960 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.906980 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-config-data\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.907129 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2481a421-76c9-4baa-bde8-c93eebdc4403-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.909062 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d802db5-d336-4639-8264-e628fa15d820-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.915057 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc 
kubenswrapper[4854]: I0103 06:05:36.915264 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-scripts\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.915299 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-config-data\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.922807 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d802db5-d336-4639-8264-e628fa15d820-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:36 crc kubenswrapper[4854]: I0103 06:05:36.938602 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzv9\" (UniqueName: \"kubernetes.io/projected/2d802db5-d336-4639-8264-e628fa15d820-kube-api-access-zvzv9\") pod \"cinder-scheduler-0\" (UID: \"2d802db5-d336-4639-8264-e628fa15d820\") " pod="openstack/cinder-scheduler-0" Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.058745 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.145261 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79cf8b54b6-vks4f"] Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.249176 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55c4d98986-689lr" event={"ID":"2481a421-76c9-4baa-bde8-c93eebdc4403","Type":"ContainerDied","Data":"e59b61bb699f38a2ee510ee28ab7a005aba2f1546b065316b5520ada8a173cfc"} Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.249410 4854 scope.go:117] "RemoveContainer" containerID="077594c8c17b300edaa8fa8bc9934b5db9958d98469ec182dda06e83f37e2da3" Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.249507 4854 util.go:48] "No ready sandbox for pod can be found. 
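Two near-identical messages run through this window and mean different things: util.go:30's "No sandbox for pod can be found" fires for pods starting from scratch (the new heat, dnsmasq, and cinder-scheduler pods), while util.go:48's "No ready sandbox for pod can be found", seen here for barbican-api-55c4d98986-689lr, indicates a pod whose existing sandbox is no longer usable and must be recreated. Just below, the manager.go watch warning (Status 404 for the new crio-2b57228... cgroup) is the cgroup watcher racing CRI-O's registration of the dnsmasq sandbox; the same ID shows up as a started container moments later. The dnsmasq readiness failure below it is a plain TCP dial hitting a deadline while the old instance is being replaced. A minimal equivalent of such a check, with the address taken from the log and the one-second timeout assumed:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe mimics a tcpSocket readiness check: success means the dial
// completes before the timeout; failure surfaces the dial error.
func probe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err)
	}
	return conn.Close()
}

func main() {
	if err := probe("10.217.0.190:5353", time.Second); err != nil {
		fmt.Println(err) // e.g. "dial tcp 10.217.0.190:5353: i/o timeout"
	} else {
		fmt.Println("ready")
	}
}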
Need to start a new one" pod="openstack/barbican-api-55c4d98986-689lr" Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.296212 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerStarted","Data":"7178d50e98e855e6c84b083efd4c905e5183195ec87ae0243e3cfa551a910ec7"} Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.307767 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79cf8b54b6-vks4f" event={"ID":"57dd35dc-074c-4a29-92f6-afebc0f9fad3","Type":"ContainerStarted","Data":"58da2c540f51305af2b3dc6e981c31aed48347582e1b215dc37ded0dc9bcabbf"} Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.381329 4854 scope.go:117] "RemoveContainer" containerID="7613dcbe587227c5c7f61de9f7ee2d2a1e76ab284f23afa6fe78b635af035ceb" Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.477944 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55c4d98986-689lr"] Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.518828 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55c4d98986-689lr"] Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.552869 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5486c4bcf9-6vs2g"] Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.915591 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-b57b6"] Jan 03 06:05:37 crc kubenswrapper[4854]: I0103 06:05:37.936656 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-86b9b59fd6-bgwdc"] Jan 03 06:05:37 crc kubenswrapper[4854]: W0103 06:05:37.955448 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cec124c_cb6f_4b93_a398_6c766bbc6c19.slice/crio-2b57228bbacbe5beefb4c2d8277742fa5b1f137600954bda7f674edcc7dd5ea7 WatchSource:0}: Error finding container 2b57228bbacbe5beefb4c2d8277742fa5b1f137600954bda7f674edcc7dd5ea7: Status 404 returned error can't find the container with id 2b57228bbacbe5beefb4c2d8277742fa5b1f137600954bda7f674edcc7dd5ea7 Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.206168 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2481a421-76c9-4baa-bde8-c93eebdc4403" path="/var/lib/kubelet/pods/2481a421-76c9-4baa-bde8-c93eebdc4403/volumes" Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.209445 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab04439-797a-489b-a4a7-d7cd3c23ccec" path="/var/lib/kubelet/pods/cab04439-797a-489b-a4a7-d7cd3c23ccec/volumes" Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.211406 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.334381 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5486c4bcf9-6vs2g" event={"ID":"9644f7d8-d9bb-436b-a629-81cbc06323be","Type":"ContainerStarted","Data":"a618b72abb930e7cb9a423c21a86f28af4d58c546a92497c6ea02b0ef573f07b"} Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.336435 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" event={"ID":"793c54a0-e893-4837-9039-3ff439b66296","Type":"ContainerStarted","Data":"4cada4a0633768863f3db61539197f8b9dcdc96572cddc87268b1a0f1d1754a2"} Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 
06:05:38.338377 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" event={"ID":"5cec124c-cb6f-4b93-a398-6c766bbc6c19","Type":"ContainerStarted","Data":"2b57228bbacbe5beefb4c2d8277742fa5b1f137600954bda7f674edcc7dd5ea7"} Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.343761 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d802db5-d336-4639-8264-e628fa15d820","Type":"ContainerStarted","Data":"5cc5595f748b9ba41cf25fbbc63f389dce21d97c42332d06b5dd9e6fbfff12cf"} Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.346136 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerStarted","Data":"a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49"} Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.348346 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79cf8b54b6-vks4f" event={"ID":"57dd35dc-074c-4a29-92f6-afebc0f9fad3","Type":"ContainerStarted","Data":"86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1"} Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.349053 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.389153 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-79cf8b54b6-vks4f" podStartSLOduration=3.389125758 podStartE2EDuration="3.389125758s" podCreationTimestamp="2026-01-03 06:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:38.38220012 +0000 UTC m=+1516.708776702" watchObservedRunningTime="2026-01-03 06:05:38.389125758 +0000 UTC m=+1516.715702340" Jan 03 06:05:38 crc kubenswrapper[4854]: I0103 06:05:38.825379 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fb745b69-f55rh" podUID="0cac862a-2a43-44ed-903a-8d7b09100ac3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.190:5353: i/o timeout" Jan 03 06:05:39 crc kubenswrapper[4854]: I0103 06:05:39.373325 4854 generic.go:334] "Generic (PLEG): container finished" podID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" containerID="9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6" exitCode=0 Jan 03 06:05:39 crc kubenswrapper[4854]: I0103 06:05:39.373603 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" event={"ID":"5cec124c-cb6f-4b93-a398-6c766bbc6c19","Type":"ContainerDied","Data":"9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6"} Jan 03 06:05:39 crc kubenswrapper[4854]: I0103 06:05:39.381524 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerStarted","Data":"ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe"} Jan 03 06:05:40 crc kubenswrapper[4854]: I0103 06:05:40.393208 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d802db5-d336-4639-8264-e628fa15d820","Type":"ContainerStarted","Data":"8d605ff5812a1bb92720d9dfe6ed631408ba09c2d540278cddd1b7b5491d467b"} Jan 03 06:05:40 crc kubenswrapper[4854]: I0103 06:05:40.395406 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerStarted","Data":"1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4"} Jan 03 06:05:40 crc kubenswrapper[4854]: I0103 06:05:40.966791 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85bb9cd67c-w6ss9" Jan 03 06:05:40 crc kubenswrapper[4854]: I0103 06:05:40.983740 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85bb9cd67c-w6ss9" Jan 03 06:05:41 crc kubenswrapper[4854]: I0103 06:05:41.662657 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.493383 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.503206 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5486c4bcf9-6vs2g" event={"ID":"9644f7d8-d9bb-436b-a629-81cbc06323be","Type":"ContainerStarted","Data":"32d7e9ad4520e2c4b8c1e0d28094816f9dcf9e6320dcd23f669f0d9ad488f038"} Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.503305 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.516646 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" event={"ID":"793c54a0-e893-4837-9039-3ff439b66296","Type":"ContainerStarted","Data":"a65f3aa29d8f6d79ea8ee0b4052fed0469ef6f1c239b16c271203dddee3041ac"} Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.517175 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.540336 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5486c4bcf9-6vs2g" podStartSLOduration=3.004640646 podStartE2EDuration="7.540308854s" podCreationTimestamp="2026-01-03 06:05:36 +0000 UTC" firstStartedPulling="2026-01-03 06:05:37.527238375 +0000 UTC m=+1515.853814947" lastFinishedPulling="2026-01-03 06:05:42.062906593 +0000 UTC m=+1520.389483155" observedRunningTime="2026-01-03 06:05:43.533171181 +0000 UTC m=+1521.859747763" watchObservedRunningTime="2026-01-03 06:05:43.540308854 +0000 UTC m=+1521.866885426" Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.560817 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" event={"ID":"5cec124c-cb6f-4b93-a398-6c766bbc6c19","Type":"ContainerStarted","Data":"f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a"} Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.562414 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.602016 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" podStartSLOduration=3.530553839 podStartE2EDuration="7.601993872s" podCreationTimestamp="2026-01-03 06:05:36 +0000 UTC" firstStartedPulling="2026-01-03 06:05:37.99517349 +0000 UTC m=+1516.321750052" lastFinishedPulling="2026-01-03 06:05:42.066613513 +0000 UTC m=+1520.393190085" observedRunningTime="2026-01-03 06:05:43.58419542 +0000 UTC m=+1521.910771992" watchObservedRunningTime="2026-01-03 06:05:43.601993872 +0000 
UTC m=+1521.928570454" Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.631116 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" podStartSLOduration=7.631096779 podStartE2EDuration="7.631096779s" podCreationTimestamp="2026-01-03 06:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:43.606881221 +0000 UTC m=+1521.933457793" watchObservedRunningTime="2026-01-03 06:05:43.631096779 +0000 UTC m=+1521.957673351" Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.645310 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d802db5-d336-4639-8264-e628fa15d820","Type":"ContainerStarted","Data":"a93923cdad6a3418c62058c1f643078abeab82b31f8129dc6af195a0d6730973"} Jan 03 06:05:43 crc kubenswrapper[4854]: I0103 06:05:43.680322 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.680304024 podStartE2EDuration="7.680304024s" podCreationTimestamp="2026-01-03 06:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:43.675402955 +0000 UTC m=+1522.001979527" watchObservedRunningTime="2026-01-03 06:05:43.680304024 +0000 UTC m=+1522.006880596" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.696037 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5fd96db984-rmm7s"] Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.703585 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.708274 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-combined-ca-bundle\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.708528 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data-custom\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.708708 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.708903 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chkk\" (UniqueName: \"kubernetes.io/projected/b1aee80e-651a-4434-a1da-34bd6dbd83bd-kube-api-access-4chkk\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.710984 4854 kubelet.go:2421] "SyncLoop ADD" source="api" 
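The pod_startup_latency_tracker entries above document their own arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. That is why pods that pulled nothing (heat-engine, dnsmasq, cinder-scheduler, whose pull timestamps are the zero value 0001-01-01) report identical SLO and E2E figures. Checking heat-api-5486c4bcf9-6vs2g's numbers in Go, using the duration and timestamps exactly as logged:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// layout matches Go's default time.Time formatting used in the log
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// pull window for heat-api-5486c4bcf9-6vs2g, copied from the tracker entry
	firstPull := mustParse("2026-01-03 06:05:37.527238375 +0000 UTC")
	lastPull := mustParse("2026-01-03 06:05:42.062906593 +0000 UTC")
	e2e := 7540308854 * time.Nanosecond // logged podStartE2EDuration

	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // 3.004640636s; the logged 3.004640646 differs only in the last digits (float rounding)
}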
pods=["openstack/heat-engine-6749994886-zsx65"] Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.712537 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.724872 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6d569589c9-7q7mv"] Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.727046 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.733765 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6749994886-zsx65"] Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.747048 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d569589c9-7q7mv"] Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.767390 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5fd96db984-rmm7s"] Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.814488 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chkk\" (UniqueName: \"kubernetes.io/projected/b1aee80e-651a-4434-a1da-34bd6dbd83bd-kube-api-access-4chkk\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.814856 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-combined-ca-bundle\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.814924 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data-custom\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.815004 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.823163 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.829849 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-combined-ca-bundle\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.834844 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4chkk\" (UniqueName: 
\"kubernetes.io/projected/b1aee80e-651a-4434-a1da-34bd6dbd83bd-kube-api-access-4chkk\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.882632 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data-custom\") pod \"heat-api-5fd96db984-rmm7s\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.923806 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data-custom\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.923922 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data-custom\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.923966 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjfm7\" (UniqueName: \"kubernetes.io/projected/3a290934-d2f6-475a-814a-209a27b7e897-kube-api-access-jjfm7\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.923990 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-combined-ca-bundle\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.924018 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-combined-ca-bundle\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.924192 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.924353 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:44 crc kubenswrapper[4854]: I0103 06:05:44.924430 4854 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg62d\" (UniqueName: \"kubernetes.io/projected/d346baaf-3040-4209-9049-e92c7b033015-kube-api-access-kg62d\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.027047 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data-custom\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.027124 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data-custom\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.027150 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjfm7\" (UniqueName: \"kubernetes.io/projected/3a290934-d2f6-475a-814a-209a27b7e897-kube-api-access-jjfm7\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.027167 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-combined-ca-bundle\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.027190 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-combined-ca-bundle\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.027269 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.027348 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.027387 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg62d\" (UniqueName: \"kubernetes.io/projected/d346baaf-3040-4209-9049-e92c7b033015-kube-api-access-kg62d\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 
06:05:45.032768 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data-custom\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.034710 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-combined-ca-bundle\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.035616 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.037646 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-combined-ca-bundle\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.038475 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data-custom\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.055554 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.055756 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg62d\" (UniqueName: \"kubernetes.io/projected/d346baaf-3040-4209-9049-e92c7b033015-kube-api-access-kg62d\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.055942 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjfm7\" (UniqueName: \"kubernetes.io/projected/3a290934-d2f6-475a-814a-209a27b7e897-kube-api-access-jjfm7\") pod \"heat-cfnapi-6d569589c9-7q7mv\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.059067 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data\") pod \"heat-engine-6749994886-zsx65\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.068858 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:45 crc kubenswrapper[4854]: I0103 06:05:45.102290 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.048012 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5486c4bcf9-6vs2g"] Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.048861 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5486c4bcf9-6vs2g" podUID="9644f7d8-d9bb-436b-a629-81cbc06323be" containerName="heat-api" containerID="cri-o://32d7e9ad4520e2c4b8c1e0d28094816f9dcf9e6320dcd23f669f0d9ad488f038" gracePeriod=60 Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.060606 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.060869 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5486c4bcf9-6vs2g" podUID="9644f7d8-d9bb-436b-a629-81cbc06323be" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.214:8004/healthcheck\": EOF" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.076368 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-86b9b59fd6-bgwdc"] Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.076609 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" podUID="793c54a0-e893-4837-9039-3ff439b66296" containerName="heat-cfnapi" containerID="cri-o://a65f3aa29d8f6d79ea8ee0b4052fed0469ef6f1c239b16c271203dddee3041ac" gracePeriod=60 Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.102760 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-c795c8675-ng42x"] Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.104713 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.113532 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.113743 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.120008 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c795c8675-ng42x"] Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.213459 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-94fd9f97f-bcw2n"] Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.215163 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.229318 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.229549 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.253419 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-94fd9f97f-bcw2n"] Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.283705 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-internal-tls-certs\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.283764 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-public-tls-certs\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.283793 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.283832 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-combined-ca-bundle\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.283932 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data-custom\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.283974 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxk8z\" (UniqueName: \"kubernetes.io/projected/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-kube-api-access-rxk8z\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387195 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-internal-tls-certs\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387297 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-public-tls-certs\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387325 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data-custom\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387359 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387422 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-combined-ca-bundle\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387484 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-combined-ca-bundle\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387561 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-internal-tls-certs\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387609 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387657 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data-custom\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387741 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxk8z\" (UniqueName: \"kubernetes.io/projected/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-kube-api-access-rxk8z\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387792 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-public-tls-certs\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.387844 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c722r\" (UniqueName: \"kubernetes.io/projected/b49c2220-2581-4c4f-a034-10f34ddc8f80-kube-api-access-c722r\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.420785 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-public-tls-certs\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.430283 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-internal-tls-certs\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.431339 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-combined-ca-bundle\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.431732 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxk8z\" (UniqueName: \"kubernetes.io/projected/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-kube-api-access-rxk8z\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.431902 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.461808 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data-custom\") pod \"heat-api-c795c8675-ng42x\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.474289 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.495782 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-public-tls-certs\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.495847 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c722r\" (UniqueName: \"kubernetes.io/projected/b49c2220-2581-4c4f-a034-10f34ddc8f80-kube-api-access-c722r\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.495894 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data-custom\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.495960 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-combined-ca-bundle\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.496009 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-internal-tls-certs\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.496041 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.506109 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data-custom\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.510867 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-combined-ca-bundle\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.511651 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-public-tls-certs\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " 
pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.514861 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-internal-tls-certs\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.519023 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.547744 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c722r\" (UniqueName: \"kubernetes.io/projected/b49c2220-2581-4c4f-a034-10f34ddc8f80-kube-api-access-c722r\") pod \"heat-cfnapi-94fd9f97f-bcw2n\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") " pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.594113 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:47 crc kubenswrapper[4854]: I0103 06:05:47.849513 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 03 06:05:49 crc kubenswrapper[4854]: I0103 06:05:49.830819 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.493551 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-65bkj"] Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.496533 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.525451 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65bkj"] Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.602503 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-catalog-content\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.602546 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-utilities\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.602623 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5xg\" (UniqueName: \"kubernetes.io/projected/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-kube-api-access-gv5xg\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.705444 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv5xg\" (UniqueName: \"kubernetes.io/projected/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-kube-api-access-gv5xg\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.705772 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-catalog-content\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.705809 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-utilities\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.706333 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-utilities\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.706386 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-catalog-content\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.726911 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gv5xg\" (UniqueName: \"kubernetes.io/projected/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-kube-api-access-gv5xg\") pod \"community-operators-65bkj\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:50 crc kubenswrapper[4854]: I0103 06:05:50.838514 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:05:51 crc kubenswrapper[4854]: I0103 06:05:51.618226 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:05:51 crc kubenswrapper[4854]: I0103 06:05:51.695961 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-klgkr"] Jan 03 06:05:51 crc kubenswrapper[4854]: I0103 06:05:51.696453 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" podUID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerName="dnsmasq-dns" containerID="cri-o://5dde7ca3d26f94eae20a208ed1bfb1b2061d103aca875cd6846d10ec38acc813" gracePeriod=10 Jan 03 06:05:51 crc kubenswrapper[4854]: I0103 06:05:51.837973 4854 generic.go:334] "Generic (PLEG): container finished" podID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerID="5dde7ca3d26f94eae20a208ed1bfb1b2061d103aca875cd6846d10ec38acc813" exitCode=0 Jan 03 06:05:51 crc kubenswrapper[4854]: I0103 06:05:51.838011 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" event={"ID":"1f4da9c0-58a1-41d0-9d97-6cf376e6233d","Type":"ContainerDied","Data":"5dde7ca3d26f94eae20a208ed1bfb1b2061d103aca875cd6846d10ec38acc813"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.145217 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65bkj"] Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.154979 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-94fd9f97f-bcw2n"] Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.164510 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5fd96db984-rmm7s"] Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.173944 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c795c8675-ng42x"] Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.183780 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d569589c9-7q7mv"] Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.192800 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6749994886-zsx65"] Jan 03 06:05:53 crc kubenswrapper[4854]: W0103 06:05:53.331540 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1af8fd7_a6f1_40f2_b5bc_e0d15e865698.slice/crio-e816e8736bfda71b5755ce5a173078db6e7df3cd62f311a37c75d5f199e2952f WatchSource:0}: Error finding container e816e8736bfda71b5755ce5a173078db6e7df3cd62f311a37c75d5f199e2952f: Status 404 returned error can't find the container with id e816e8736bfda71b5755ce5a173078db6e7df3cd62f311a37c75d5f199e2952f Jan 03 06:05:53 crc kubenswrapper[4854]: W0103 06:05:53.332023 4854 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1aee80e_651a_4434_a1da_34bd6dbd83bd.slice/crio-9de06276a04ebe13a7314bbf8d03bf799dc868d3a0c6db676c6c5b8203034250 WatchSource:0}: Error finding container 9de06276a04ebe13a7314bbf8d03bf799dc868d3a0c6db676c6c5b8203034250: Status 404 returned error can't find the container with id 9de06276a04ebe13a7314bbf8d03bf799dc868d3a0c6db676c6c5b8203034250 Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.548794 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" podUID="793c54a0-e893-4837-9039-3ff439b66296" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.213:8000/healthcheck\": read tcp 10.217.0.2:52414->10.217.0.213:8000: read: connection reset by peer" Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.563938 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5486c4bcf9-6vs2g" podUID="9644f7d8-d9bb-436b-a629-81cbc06323be" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.214:8004/healthcheck\": read tcp 10.217.0.2:36094->10.217.0.214:8004: read: connection reset by peer" Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.828155 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.896299 4854 generic.go:334] "Generic (PLEG): container finished" podID="9644f7d8-d9bb-436b-a629-81cbc06323be" containerID="32d7e9ad4520e2c4b8c1e0d28094816f9dcf9e6320dcd23f669f0d9ad488f038" exitCode=0 Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.896408 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5486c4bcf9-6vs2g" event={"ID":"9644f7d8-d9bb-436b-a629-81cbc06323be","Type":"ContainerDied","Data":"32d7e9ad4520e2c4b8c1e0d28094816f9dcf9e6320dcd23f669f0d9ad488f038"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.899257 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fd96db984-rmm7s" event={"ID":"b1aee80e-651a-4434-a1da-34bd6dbd83bd","Type":"ContainerStarted","Data":"9de06276a04ebe13a7314bbf8d03bf799dc868d3a0c6db676c6c5b8203034250"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.900877 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6749994886-zsx65" event={"ID":"d346baaf-3040-4209-9049-e92c7b033015","Type":"ContainerStarted","Data":"e8fed35324b766976ad7e2b21295c85175c80803b9cd488abf74f10595a77f49"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.903651 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerStarted","Data":"ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.903716 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="ceilometer-central-agent" containerID="cri-o://a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49" gracePeriod=30 Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.903749 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.903797 4854 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="proxy-httpd" containerID="cri-o://ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023" gracePeriod=30 Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.903835 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="sg-core" containerID="cri-o://1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4" gracePeriod=30 Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.903870 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="ceilometer-notification-agent" containerID="cri-o://ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe" gracePeriod=30 Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.910123 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-nb\") pod \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.910208 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wx6m\" (UniqueName: \"kubernetes.io/projected/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-kube-api-access-5wx6m\") pod \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.910338 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-config\") pod \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.910463 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-svc\") pod \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.910543 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-swift-storage-0\") pod \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.910560 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-sb\") pod \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\" (UID: \"1f4da9c0-58a1-41d0-9d97-6cf376e6233d\") " Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.934377 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" event={"ID":"1f4da9c0-58a1-41d0-9d97-6cf376e6233d","Type":"ContainerDied","Data":"86e7461bf666df7d18a14b87ed1e808de533e7504191a20e3dd9d82fd0eb142c"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.934428 4854 scope.go:117] "RemoveContainer" containerID="5dde7ca3d26f94eae20a208ed1bfb1b2061d103aca875cd6846d10ec38acc813" Jan 03 06:05:53 crc kubenswrapper[4854]: 
I0103 06:05:53.935312 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.938414 4854 generic.go:334] "Generic (PLEG): container finished" podID="793c54a0-e893-4837-9039-3ff439b66296" containerID="a65f3aa29d8f6d79ea8ee0b4052fed0469ef6f1c239b16c271203dddee3041ac" exitCode=0 Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.938473 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" event={"ID":"793c54a0-e893-4837-9039-3ff439b66296","Type":"ContainerDied","Data":"a65f3aa29d8f6d79ea8ee0b4052fed0469ef6f1c239b16c271203dddee3041ac"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.944259 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65bkj" event={"ID":"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa","Type":"ContainerStarted","Data":"6bcc2df716abf7248a61defd54f00ec6f18851f022aa55ae52d2f344d43697b7"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.947324 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" event={"ID":"3a290934-d2f6-475a-814a-209a27b7e897","Type":"ContainerStarted","Data":"75fdfe4bd8f4858407c7adc62ebd4bc322ebbd9f69636d7cfeef7b5fad17acd4"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.949616 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c795c8675-ng42x" event={"ID":"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698","Type":"ContainerStarted","Data":"e816e8736bfda71b5755ce5a173078db6e7df3cd62f311a37c75d5f199e2952f"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.951544 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" event={"ID":"b49c2220-2581-4c4f-a034-10f34ddc8f80","Type":"ContainerStarted","Data":"d71884cffe7d197e9abf639f17d8e8a438b16a078f4eb76df293628fee061bcf"} Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.951900 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=11.428452355 podStartE2EDuration="18.951887977s" podCreationTimestamp="2026-01-03 06:05:35 +0000 UTC" firstStartedPulling="2026-01-03 06:05:36.820330356 +0000 UTC m=+1515.146906928" lastFinishedPulling="2026-01-03 06:05:44.343765978 +0000 UTC m=+1522.670342550" observedRunningTime="2026-01-03 06:05:53.921596572 +0000 UTC m=+1532.248173154" watchObservedRunningTime="2026-01-03 06:05:53.951887977 +0000 UTC m=+1532.278464549" Jan 03 06:05:53 crc kubenswrapper[4854]: I0103 06:05:53.958607 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-kube-api-access-5wx6m" (OuterVolumeSpecName: "kube-api-access-5wx6m") pod "1f4da9c0-58a1-41d0-9d97-6cf376e6233d" (UID: "1f4da9c0-58a1-41d0-9d97-6cf376e6233d"). InnerVolumeSpecName "kube-api-access-5wx6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:53 crc kubenswrapper[4854]: E0103 06:05:53.968581 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9644f7d8_d9bb_436b_a629_81cbc06323be.slice/crio-conmon-32d7e9ad4520e2c4b8c1e0d28094816f9dcf9e6320dcd23f669f0d9ad488f038.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9644f7d8_d9bb_436b_a629_81cbc06323be.slice/crio-32d7e9ad4520e2c4b8c1e0d28094816f9dcf9e6320dcd23f669f0d9ad488f038.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod793c54a0_e893_4837_9039_3ff439b66296.slice/crio-conmon-a65f3aa29d8f6d79ea8ee0b4052fed0469ef6f1c239b16c271203dddee3041ac.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.002881 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f4da9c0-58a1-41d0-9d97-6cf376e6233d" (UID: "1f4da9c0-58a1-41d0-9d97-6cf376e6233d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.018306 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wx6m\" (UniqueName: \"kubernetes.io/projected/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-kube-api-access-5wx6m\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.018338 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.054756 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-config" (OuterVolumeSpecName: "config") pod "1f4da9c0-58a1-41d0-9d97-6cf376e6233d" (UID: "1f4da9c0-58a1-41d0-9d97-6cf376e6233d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.092913 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1f4da9c0-58a1-41d0-9d97-6cf376e6233d" (UID: "1f4da9c0-58a1-41d0-9d97-6cf376e6233d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.103586 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f4da9c0-58a1-41d0-9d97-6cf376e6233d" (UID: "1f4da9c0-58a1-41d0-9d97-6cf376e6233d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.109041 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f4da9c0-58a1-41d0-9d97-6cf376e6233d" (UID: "1f4da9c0-58a1-41d0-9d97-6cf376e6233d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.122478 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.122609 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.122641 4854 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.122655 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f4da9c0-58a1-41d0-9d97-6cf376e6233d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.347457 4854 scope.go:117] "RemoveContainer" containerID="5e1dbf25860fd5885f0a1a369acd074c344d95ffafb3709b272c0b5769b94715" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.390728 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.435827 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data-custom\") pod \"793c54a0-e893-4837-9039-3ff439b66296\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.436144 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data\") pod \"793c54a0-e893-4837-9039-3ff439b66296\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.437039 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-combined-ca-bundle\") pod \"793c54a0-e893-4837-9039-3ff439b66296\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.437175 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhnc9\" (UniqueName: \"kubernetes.io/projected/793c54a0-e893-4837-9039-3ff439b66296-kube-api-access-fhnc9\") pod \"793c54a0-e893-4837-9039-3ff439b66296\" (UID: \"793c54a0-e893-4837-9039-3ff439b66296\") " Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.446995 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793c54a0-e893-4837-9039-3ff439b66296-kube-api-access-fhnc9" (OuterVolumeSpecName: "kube-api-access-fhnc9") pod "793c54a0-e893-4837-9039-3ff439b66296" (UID: "793c54a0-e893-4837-9039-3ff439b66296"). InnerVolumeSpecName "kube-api-access-fhnc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.449059 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "793c54a0-e893-4837-9039-3ff439b66296" (UID: "793c54a0-e893-4837-9039-3ff439b66296"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.454906 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.469255 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-klgkr"] Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.473808 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-klgkr"] Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.539765 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-combined-ca-bundle\") pod \"9644f7d8-d9bb-436b-a629-81cbc06323be\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.539840 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data-custom\") pod \"9644f7d8-d9bb-436b-a629-81cbc06323be\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.540224 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlmg9\" (UniqueName: \"kubernetes.io/projected/9644f7d8-d9bb-436b-a629-81cbc06323be-kube-api-access-qlmg9\") pod \"9644f7d8-d9bb-436b-a629-81cbc06323be\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.540279 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data\") pod \"9644f7d8-d9bb-436b-a629-81cbc06323be\" (UID: \"9644f7d8-d9bb-436b-a629-81cbc06323be\") " Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.541139 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.541165 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhnc9\" (UniqueName: \"kubernetes.io/projected/793c54a0-e893-4837-9039-3ff439b66296-kube-api-access-fhnc9\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.553812 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9644f7d8-d9bb-436b-a629-81cbc06323be" (UID: "9644f7d8-d9bb-436b-a629-81cbc06323be"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.560847 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9644f7d8-d9bb-436b-a629-81cbc06323be-kube-api-access-qlmg9" (OuterVolumeSpecName: "kube-api-access-qlmg9") pod "9644f7d8-d9bb-436b-a629-81cbc06323be" (UID: "9644f7d8-d9bb-436b-a629-81cbc06323be"). InnerVolumeSpecName "kube-api-access-qlmg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.638300 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "793c54a0-e893-4837-9039-3ff439b66296" (UID: "793c54a0-e893-4837-9039-3ff439b66296"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.650255 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlmg9\" (UniqueName: \"kubernetes.io/projected/9644f7d8-d9bb-436b-a629-81cbc06323be-kube-api-access-qlmg9\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.650298 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.650314 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.667200 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9644f7d8-d9bb-436b-a629-81cbc06323be" (UID: "9644f7d8-d9bb-436b-a629-81cbc06323be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.669908 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data" (OuterVolumeSpecName: "config-data") pod "793c54a0-e893-4837-9039-3ff439b66296" (UID: "793c54a0-e893-4837-9039-3ff439b66296"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.707405 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data" (OuterVolumeSpecName: "config-data") pod "9644f7d8-d9bb-436b-a629-81cbc06323be" (UID: "9644f7d8-d9bb-436b-a629-81cbc06323be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.752561 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.752598 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9644f7d8-d9bb-436b-a629-81cbc06323be-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.752610 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c54a0-e893-4837-9039-3ff439b66296-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.968140 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" event={"ID":"3a290934-d2f6-475a-814a-209a27b7e897","Type":"ContainerStarted","Data":"55feecbf9b856f9f17589db2c47e1a9f36434fc5e6c51a11c3eb47994cc72cbd"} Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.968545 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.971512 4854 generic.go:334] "Generic (PLEG): container finished" podID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerID="ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023" exitCode=0 Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.971541 4854 generic.go:334] "Generic (PLEG): container finished" podID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerID="1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4" exitCode=2 Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.971552 4854 generic.go:334] "Generic (PLEG): container finished" podID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerID="a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49" exitCode=0 Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.971591 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerDied","Data":"ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023"} Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.971621 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerDied","Data":"1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4"} Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.971648 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerDied","Data":"a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49"} Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.973631 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c795c8675-ng42x" event={"ID":"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698","Type":"ContainerStarted","Data":"547285aff9ff3cd0b00a39340f508011d742c064d6f1bc64f70bcd294cc71028"} Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.974928 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.979474 4854 
generic.go:334] "Generic (PLEG): container finished" podID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerID="b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903" exitCode=0 Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.979533 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65bkj" event={"ID":"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa","Type":"ContainerDied","Data":"b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903"} Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.984521 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" event={"ID":"793c54a0-e893-4837-9039-3ff439b66296","Type":"ContainerDied","Data":"4cada4a0633768863f3db61539197f8b9dcdc96572cddc87268b1a0f1d1754a2"} Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.984580 4854 scope.go:117] "RemoveContainer" containerID="a65f3aa29d8f6d79ea8ee0b4052fed0469ef6f1c239b16c271203dddee3041ac" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.984737 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86b9b59fd6-bgwdc" Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.996154 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fd96db984-rmm7s" event={"ID":"b1aee80e-651a-4434-a1da-34bd6dbd83bd","Type":"ContainerStarted","Data":"53aa05da04023d734a8bafdf7834827581c4fbf31e56de16749307037962cd8d"} Jan 03 06:05:54 crc kubenswrapper[4854]: I0103 06:05:54.997004 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.001518 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" podStartSLOduration=11.00149745 podStartE2EDuration="11.00149745s" podCreationTimestamp="2026-01-03 06:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:54.996859587 +0000 UTC m=+1533.323436159" watchObservedRunningTime="2026-01-03 06:05:55.00149745 +0000 UTC m=+1533.328074032" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.012431 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6749994886-zsx65" event={"ID":"d346baaf-3040-4209-9049-e92c7b033015","Type":"ContainerStarted","Data":"d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b"} Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.013482 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.020690 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5486c4bcf9-6vs2g" event={"ID":"9644f7d8-d9bb-436b-a629-81cbc06323be","Type":"ContainerDied","Data":"a618b72abb930e7cb9a423c21a86f28af4d58c546a92497c6ea02b0ef573f07b"} Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.020784 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5486c4bcf9-6vs2g" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.027906 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5fd96db984-rmm7s" podStartSLOduration=11.027886221 podStartE2EDuration="11.027886221s" podCreationTimestamp="2026-01-03 06:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:55.018499333 +0000 UTC m=+1533.345075935" watchObservedRunningTime="2026-01-03 06:05:55.027886221 +0000 UTC m=+1533.354462803" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.031881 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" event={"ID":"b49c2220-2581-4c4f-a034-10f34ddc8f80","Type":"ContainerStarted","Data":"1836a121cfa508a6a36da14b8065b7815a3f0a82cd94413993def51986cb2c3d"} Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.031935 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.039631 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5d443f98-7ca4-4ea2-bf9b-c64182525733","Type":"ContainerStarted","Data":"34e64851240ec0c9fee881990abe620b065ef8cd728f51c88d873105643857c1"} Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.077512 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-c795c8675-ng42x" podStartSLOduration=8.077490266 podStartE2EDuration="8.077490266s" podCreationTimestamp="2026-01-03 06:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:55.062664386 +0000 UTC m=+1533.389240978" watchObservedRunningTime="2026-01-03 06:05:55.077490266 +0000 UTC m=+1533.404066838" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.080552 4854 scope.go:117] "RemoveContainer" containerID="32d7e9ad4520e2c4b8c1e0d28094816f9dcf9e6320dcd23f669f0d9ad488f038" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.101737 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" podStartSLOduration=8.101713164 podStartE2EDuration="8.101713164s" podCreationTimestamp="2026-01-03 06:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:55.084559657 +0000 UTC m=+1533.411136239" watchObservedRunningTime="2026-01-03 06:05:55.101713164 +0000 UTC m=+1533.428289746" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.129452 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6749994886-zsx65" podStartSLOduration=11.129426957 podStartE2EDuration="11.129426957s" podCreationTimestamp="2026-01-03 06:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:05:55.105937687 +0000 UTC m=+1533.432514259" watchObservedRunningTime="2026-01-03 06:05:55.129426957 +0000 UTC m=+1533.456003529" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.153575 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.643466421 podStartE2EDuration="27.153555043s" 
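
The "Generic (PLEG): container finished" and "SyncLoop (PLEG)" pairs above are kubelet's pod lifecycle event generator at work: a relist notices containers changing state (here the three ceilometer-0 containers exiting with codes 0, 2 and 0), and the sync loop consumes the resulting ContainerDied events. A small scanner is enough to pull the non-zero exits out of a journal like this one; the regex below is keyed to the exact field layout visible above, and the kubelet unit name in the usage comment is an assumption, not taken from this log.

// plegscan.go - a minimal sketch, not kubelet code: list non-zero
// container exits from "Generic (PLEG): container finished" entries.
// Usage (unit name assumed): journalctl -u kubelet --no-pager | go run plegscan.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var finished = regexp.MustCompile(
	`"Generic \(PLEG\): container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(\d+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal entries run long
	for sc.Scan() {
		if m := finished.FindStringSubmatch(sc.Text()); m != nil && m[3] != "0" {
			fmt.Printf("pod=%s container=%.12s exit=%s\n", m[1], m[2], m[3])
		}
	}
}
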
podCreationTimestamp="2026-01-03 06:05:28 +0000 UTC" firstStartedPulling="2026-01-03 06:05:29.579787165 +0000 UTC m=+1507.906363737" lastFinishedPulling="2026-01-03 06:05:52.089875787 +0000 UTC m=+1530.416452359" observedRunningTime="2026-01-03 06:05:55.120035299 +0000 UTC m=+1533.446611881" watchObservedRunningTime="2026-01-03 06:05:55.153555043 +0000 UTC m=+1533.480131615" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.192337 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-86b9b59fd6-bgwdc"] Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.209509 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-86b9b59fd6-bgwdc"] Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.220729 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5486c4bcf9-6vs2g"] Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.233669 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5486c4bcf9-6vs2g"] Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.831669 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.889743 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-combined-ca-bundle\") pod \"7b3ecf60-8687-4af2-a477-f2f058b854ea\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.889826 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-scripts\") pod \"7b3ecf60-8687-4af2-a477-f2f058b854ea\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.889885 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-config-data\") pod \"7b3ecf60-8687-4af2-a477-f2f058b854ea\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.889931 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-log-httpd\") pod \"7b3ecf60-8687-4af2-a477-f2f058b854ea\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.889949 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljp95\" (UniqueName: \"kubernetes.io/projected/7b3ecf60-8687-4af2-a477-f2f058b854ea-kube-api-access-ljp95\") pod \"7b3ecf60-8687-4af2-a477-f2f058b854ea\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.890003 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-run-httpd\") pod \"7b3ecf60-8687-4af2-a477-f2f058b854ea\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.890099 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-sg-core-conf-yaml\") pod \"7b3ecf60-8687-4af2-a477-f2f058b854ea\" (UID: \"7b3ecf60-8687-4af2-a477-f2f058b854ea\") " Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.890565 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7b3ecf60-8687-4af2-a477-f2f058b854ea" (UID: "7b3ecf60-8687-4af2-a477-f2f058b854ea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.890612 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7b3ecf60-8687-4af2-a477-f2f058b854ea" (UID: "7b3ecf60-8687-4af2-a477-f2f058b854ea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.890693 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.954838 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3ecf60-8687-4af2-a477-f2f058b854ea-kube-api-access-ljp95" (OuterVolumeSpecName: "kube-api-access-ljp95") pod "7b3ecf60-8687-4af2-a477-f2f058b854ea" (UID: "7b3ecf60-8687-4af2-a477-f2f058b854ea"). InnerVolumeSpecName "kube-api-access-ljp95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.962284 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-scripts" (OuterVolumeSpecName: "scripts") pod "7b3ecf60-8687-4af2-a477-f2f058b854ea" (UID: "7b3ecf60-8687-4af2-a477-f2f058b854ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.994546 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.994599 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljp95\" (UniqueName: \"kubernetes.io/projected/7b3ecf60-8687-4af2-a477-f2f058b854ea-kube-api-access-ljp95\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:55 crc kubenswrapper[4854]: I0103 06:05:55.994609 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b3ecf60-8687-4af2-a477-f2f058b854ea-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.029982 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7b3ecf60-8687-4af2-a477-f2f058b854ea" (UID: "7b3ecf60-8687-4af2-a477-f2f058b854ea"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.069351 4854 generic.go:334] "Generic (PLEG): container finished" podID="3a290934-d2f6-475a-814a-209a27b7e897" containerID="55feecbf9b856f9f17589db2c47e1a9f36434fc5e6c51a11c3eb47994cc72cbd" exitCode=1 Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.069447 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" event={"ID":"3a290934-d2f6-475a-814a-209a27b7e897","Type":"ContainerDied","Data":"55feecbf9b856f9f17589db2c47e1a9f36434fc5e6c51a11c3eb47994cc72cbd"} Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.070229 4854 scope.go:117] "RemoveContainer" containerID="55feecbf9b856f9f17589db2c47e1a9f36434fc5e6c51a11c3eb47994cc72cbd" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.078411 4854 generic.go:334] "Generic (PLEG): container finished" podID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerID="ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe" exitCode=0 Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.078539 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerDied","Data":"ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe"} Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.078601 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b3ecf60-8687-4af2-a477-f2f058b854ea","Type":"ContainerDied","Data":"7178d50e98e855e6c84b083efd4c905e5183195ec87ae0243e3cfa551a910ec7"} Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.078623 4854 scope.go:117] "RemoveContainer" containerID="ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.078930 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.097857 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.102329 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65bkj" event={"ID":"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa","Type":"ContainerStarted","Data":"7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6"} Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.134072 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b3ecf60-8687-4af2-a477-f2f058b854ea" (UID: "7b3ecf60-8687-4af2-a477-f2f058b854ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.145983 4854 generic.go:334] "Generic (PLEG): container finished" podID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" containerID="53aa05da04023d734a8bafdf7834827581c4fbf31e56de16749307037962cd8d" exitCode=1 Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.147367 4854 scope.go:117] "RemoveContainer" containerID="53aa05da04023d734a8bafdf7834827581c4fbf31e56de16749307037962cd8d" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.156197 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-config-data" (OuterVolumeSpecName: "config-data") pod "7b3ecf60-8687-4af2-a477-f2f058b854ea" (UID: "7b3ecf60-8687-4af2-a477-f2f058b854ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.162519 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" path="/var/lib/kubelet/pods/1f4da9c0-58a1-41d0-9d97-6cf376e6233d/volumes" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.163724 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793c54a0-e893-4837-9039-3ff439b66296" path="/var/lib/kubelet/pods/793c54a0-e893-4837-9039-3ff439b66296/volumes" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.164508 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9644f7d8-d9bb-436b-a629-81cbc06323be" path="/var/lib/kubelet/pods/9644f7d8-d9bb-436b-a629-81cbc06323be/volumes" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.169806 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fd96db984-rmm7s" event={"ID":"b1aee80e-651a-4434-a1da-34bd6dbd83bd","Type":"ContainerDied","Data":"53aa05da04023d734a8bafdf7834827581c4fbf31e56de16749307037962cd8d"} Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.201018 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.201055 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b3ecf60-8687-4af2-a477-f2f058b854ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.303394 4854 scope.go:117] "RemoveContainer" containerID="1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.360435 4854 scope.go:117] "RemoveContainer" containerID="ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.392262 4854 scope.go:117] "RemoveContainer" containerID="a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.456034 4854 scope.go:117] "RemoveContainer" containerID="ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.460222 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023\": container with ID starting with 
ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023 not found: ID does not exist" containerID="ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.460270 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023"} err="failed to get container status \"ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023\": rpc error: code = NotFound desc = could not find container \"ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023\": container with ID starting with ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023 not found: ID does not exist" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.460298 4854 scope.go:117] "RemoveContainer" containerID="1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.464220 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4\": container with ID starting with 1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4 not found: ID does not exist" containerID="1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.464268 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4"} err="failed to get container status \"1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4\": rpc error: code = NotFound desc = could not find container \"1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4\": container with ID starting with 1b1ef04d5af8a220b1dd91b1eac12f64374ec802017c272f80b261fd643525a4 not found: ID does not exist" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.464294 4854 scope.go:117] "RemoveContainer" containerID="ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.465409 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe\": container with ID starting with ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe not found: ID does not exist" containerID="ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.465431 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe"} err="failed to get container status \"ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe\": rpc error: code = NotFound desc = could not find container \"ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe\": container with ID starting with ed7b294cb2965625e010f32e983ff8b51d97872c264442165a36e4a7f06d42fe not found: ID does not exist" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.465446 4854 scope.go:117] "RemoveContainer" containerID="a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.469189 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49\": container with ID starting with a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49 not found: ID does not exist" containerID="a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.469226 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49"} err="failed to get container status \"a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49\": rpc error: code = NotFound desc = could not find container \"a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49\": container with ID starting with a61f5169a0987f63c9d47cea68269915733d1753687b224360f78db5169e6c49 not found: ID does not exist" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.477426 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.507852 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523145 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.523754 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="proxy-httpd" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523774 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="proxy-httpd" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.523791 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerName="dnsmasq-dns" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523801 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerName="dnsmasq-dns" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.523819 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="ceilometer-central-agent" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523826 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="ceilometer-central-agent" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.523839 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="sg-core" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523845 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="sg-core" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.523855 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="ceilometer-notification-agent" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523863 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="ceilometer-notification-agent" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.523882 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793c54a0-e893-4837-9039-3ff439b66296" 
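
The E-level "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" pairs above are benign: kubelet re-issues RemoveContainer for ceilometer-0's already-deleted container IDs, and NotFound from CRI-O simply confirms the state kubelet wanted. The same treat-NotFound-as-success pattern against the CRI API looks roughly like this; the gRPC wiring and the CRI-O socket path are assumptions for illustration, not taken from this log.

// rmcontainer.go - sketch of idempotent container removal over CRI,
// mirroring the benign NotFound errors above.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func removeContainer(ctx context.Context, rt cri.RuntimeServiceClient, id string) error {
	_, err := rt.RemoveContainer(ctx, &cri.RemoveContainerRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return nil // already gone, which is the desired end state
	}
	return err
}

func main() {
	// Socket path assumed (CRI-O's default); adjust for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)
	// One of the already-removed ceilometer-0 container IDs from the log above.
	id := "ea8cf45186e7e4af93fd758b01ec5631225555dd39457d45e226b699c7469023"
	if err := removeContainer(context.Background(), rt, id); err != nil {
		log.Fatal(err)
	}
}
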
containerName="heat-cfnapi" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523890 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="793c54a0-e893-4837-9039-3ff439b66296" containerName="heat-cfnapi" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.523919 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerName="init" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523925 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerName="init" Jan 03 06:05:56 crc kubenswrapper[4854]: E0103 06:05:56.523945 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9644f7d8-d9bb-436b-a629-81cbc06323be" containerName="heat-api" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.523950 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="9644f7d8-d9bb-436b-a629-81cbc06323be" containerName="heat-api" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.524164 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="proxy-httpd" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.524177 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="sg-core" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.524191 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="9644f7d8-d9bb-436b-a629-81cbc06323be" containerName="heat-api" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.524202 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="ceilometer-central-agent" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.524212 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerName="dnsmasq-dns" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.524224 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="793c54a0-e893-4837-9039-3ff439b66296" containerName="heat-cfnapi" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.524232 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" containerName="ceilometer-notification-agent" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.526301 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.532567 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.532875 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.550597 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.592050 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.616328 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b2nm\" (UniqueName: \"kubernetes.io/projected/b0121a5f-35a0-4d35-94f8-0438121c73c7-kube-api-access-6b2nm\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.616587 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.616867 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-log-httpd\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.617027 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.617130 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-scripts\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.617225 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-config-data\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.617332 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-run-httpd\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.719838 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-run-httpd\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.719926 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b2nm\" (UniqueName: \"kubernetes.io/projected/b0121a5f-35a0-4d35-94f8-0438121c73c7-kube-api-access-6b2nm\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.719988 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.720164 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-log-httpd\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.720215 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.720246 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-scripts\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.720285 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-config-data\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.721500 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-run-httpd\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.721642 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-log-httpd\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.726042 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-scripts\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.726966 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.729049 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-config-data\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.729455 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.739993 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b2nm\" (UniqueName: \"kubernetes.io/projected/b0121a5f-35a0-4d35-94f8-0438121c73c7-kube-api-access-6b2nm\") pod \"ceilometer-0\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " pod="openstack/ceilometer-0" Jan 03 06:05:56 crc kubenswrapper[4854]: I0103 06:05:56.851125 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.159287 4854 generic.go:334] "Generic (PLEG): container finished" podID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerID="7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6" exitCode=0 Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.161184 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65bkj" event={"ID":"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa","Type":"ContainerDied","Data":"7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6"} Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.167514 4854 generic.go:334] "Generic (PLEG): container finished" podID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" containerID="e606ed260a56ef24ec6b68d343d6a99a1cbe6140a17972664be5ee735ad4811f" exitCode=1 Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.167618 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fd96db984-rmm7s" event={"ID":"b1aee80e-651a-4434-a1da-34bd6dbd83bd","Type":"ContainerDied","Data":"e606ed260a56ef24ec6b68d343d6a99a1cbe6140a17972664be5ee735ad4811f"} Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.167692 4854 scope.go:117] "RemoveContainer" containerID="53aa05da04023d734a8bafdf7834827581c4fbf31e56de16749307037962cd8d" Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.168652 4854 scope.go:117] "RemoveContainer" containerID="e606ed260a56ef24ec6b68d343d6a99a1cbe6140a17972664be5ee735ad4811f" Jan 03 06:05:57 crc kubenswrapper[4854]: E0103 06:05:57.169231 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5fd96db984-rmm7s_openstack(b1aee80e-651a-4434-a1da-34bd6dbd83bd)\"" pod="openstack/heat-api-5fd96db984-rmm7s" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.203359 4854 generic.go:334] "Generic (PLEG): container finished" podID="3a290934-d2f6-475a-814a-209a27b7e897" 
containerID="d7f3eff565029ff58c943c330d55ab1ded2a373b273715a6cf8ef2ffe0c6fdce" exitCode=1 Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.203436 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" event={"ID":"3a290934-d2f6-475a-814a-209a27b7e897","Type":"ContainerDied","Data":"d7f3eff565029ff58c943c330d55ab1ded2a373b273715a6cf8ef2ffe0c6fdce"} Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.204332 4854 scope.go:117] "RemoveContainer" containerID="d7f3eff565029ff58c943c330d55ab1ded2a373b273715a6cf8ef2ffe0c6fdce" Jan 03 06:05:57 crc kubenswrapper[4854]: E0103 06:05:57.204634 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d569589c9-7q7mv_openstack(3a290934-d2f6-475a-814a-209a27b7e897)\"" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" podUID="3a290934-d2f6-475a-814a-209a27b7e897" Jan 03 06:05:57 crc kubenswrapper[4854]: W0103 06:05:57.353685 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0121a5f_35a0_4d35_94f8_0438121c73c7.slice/crio-8e98163000400c4a3a823138b61e728123bddc1f77cb7b68539f76d523165909 WatchSource:0}: Error finding container 8e98163000400c4a3a823138b61e728123bddc1f77cb7b68539f76d523165909: Status 404 returned error can't find the container with id 8e98163000400c4a3a823138b61e728123bddc1f77cb7b68539f76d523165909 Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.368404 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:05:57 crc kubenswrapper[4854]: I0103 06:05:57.473231 4854 scope.go:117] "RemoveContainer" containerID="55feecbf9b856f9f17589db2c47e1a9f36434fc5e6c51a11c3eb47994cc72cbd" Jan 03 06:05:58 crc kubenswrapper[4854]: I0103 06:05:58.082260 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-klgkr" podUID="1f4da9c0-58a1-41d0-9d97-6cf376e6233d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: i/o timeout" Jan 03 06:05:58 crc kubenswrapper[4854]: I0103 06:05:58.130187 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3ecf60-8687-4af2-a477-f2f058b854ea" path="/var/lib/kubelet/pods/7b3ecf60-8687-4af2-a477-f2f058b854ea/volumes" Jan 03 06:05:58 crc kubenswrapper[4854]: I0103 06:05:58.226758 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65bkj" event={"ID":"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa","Type":"ContainerStarted","Data":"04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89"} Jan 03 06:05:58 crc kubenswrapper[4854]: I0103 06:05:58.231957 4854 scope.go:117] "RemoveContainer" containerID="e606ed260a56ef24ec6b68d343d6a99a1cbe6140a17972664be5ee735ad4811f" Jan 03 06:05:58 crc kubenswrapper[4854]: E0103 06:05:58.232195 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5fd96db984-rmm7s_openstack(b1aee80e-651a-4434-a1da-34bd6dbd83bd)\"" pod="openstack/heat-api-5fd96db984-rmm7s" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" Jan 03 06:05:58 crc kubenswrapper[4854]: I0103 06:05:58.239358 4854 scope.go:117] "RemoveContainer" containerID="d7f3eff565029ff58c943c330d55ab1ded2a373b273715a6cf8ef2ffe0c6fdce" Jan 03 06:05:58 crc 
kubenswrapper[4854]: E0103 06:05:58.239621 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d569589c9-7q7mv_openstack(3a290934-d2f6-475a-814a-209a27b7e897)\"" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" podUID="3a290934-d2f6-475a-814a-209a27b7e897" Jan 03 06:05:58 crc kubenswrapper[4854]: I0103 06:05:58.263044 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-65bkj" podStartSLOduration=5.530272783 podStartE2EDuration="8.263026123s" podCreationTimestamp="2026-01-03 06:05:50 +0000 UTC" firstStartedPulling="2026-01-03 06:05:54.981887674 +0000 UTC m=+1533.308464246" lastFinishedPulling="2026-01-03 06:05:57.714641014 +0000 UTC m=+1536.041217586" observedRunningTime="2026-01-03 06:05:58.247456885 +0000 UTC m=+1536.574033467" watchObservedRunningTime="2026-01-03 06:05:58.263026123 +0000 UTC m=+1536.589602695" Jan 03 06:05:58 crc kubenswrapper[4854]: I0103 06:05:58.267158 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerStarted","Data":"d33d13d566c127c37f5680d5586632dc4aa61a6e49e2aa4a1e527452ae54db25"} Jan 03 06:05:58 crc kubenswrapper[4854]: I0103 06:05:58.267190 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerStarted","Data":"8e98163000400c4a3a823138b61e728123bddc1f77cb7b68539f76d523165909"} Jan 03 06:05:59 crc kubenswrapper[4854]: I0103 06:05:59.280195 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerStarted","Data":"8f74b26f4c88d2007763582aa2c604f6724eaa94eb8f121534ad82f1a849bfd1"} Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.056675 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.057039 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.058032 4854 scope.go:117] "RemoveContainer" containerID="e606ed260a56ef24ec6b68d343d6a99a1cbe6140a17972664be5ee735ad4811f" Jan 03 06:06:00 crc kubenswrapper[4854]: E0103 06:06:00.058395 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5fd96db984-rmm7s_openstack(b1aee80e-651a-4434-a1da-34bd6dbd83bd)\"" pod="openstack/heat-api-5fd96db984-rmm7s" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.102834 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.102879 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.103831 4854 scope.go:117] "RemoveContainer" containerID="d7f3eff565029ff58c943c330d55ab1ded2a373b273715a6cf8ef2ffe0c6fdce" Jan 03 06:06:00 crc kubenswrapper[4854]: E0103 06:06:00.104100 4854 pod_workers.go:1301] "Error syncing pod, skipping" 
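
heat-api-5fd96db984-rmm7s and heat-cfnapi-6d569589c9-7q7mv are now in a crash loop: each fresh container exits with code 1 within a second or two of starting, the liveness probes go unhealthy, and kubelet declines to restart immediately ("back-off 10s restarting failed container"). In a stock kubelet the restart delay starts at 10s and doubles per failure up to a 5m cap; those defaults are an assumption about this particular build, though the 10s base matches the messages here. The schedule that implies:

// backoff.go - the assumed kubelet restart back-off schedule:
// 10s base, doubling per failure, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second
	for failure := 1; failure <= 8; failure++ {
		fmt.Printf("failure %d: next restart in %v\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The dnsmasq readiness failure a few entries up ("dial tcp 10.217.0.204:5353: i/o timeout") is the other probe outcome: a failed readiness probe only marks the pod unready, it does not restart anything.
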
err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d569589c9-7q7mv_openstack(3a290934-d2f6-475a-814a-209a27b7e897)\"" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" podUID="3a290934-d2f6-475a-814a-209a27b7e897" Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.292826 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerStarted","Data":"b6362ccee04a2d06ba1f861bcebe22f9dcd537a259ee2e1da76d487e037b24fa"} Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.839217 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.839258 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:06:00 crc kubenswrapper[4854]: I0103 06:06:00.956523 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.054631 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8rffb"] Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.082489 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.091987 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rffb"] Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.160805 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.255768 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-catalog-content\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.256198 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7c5\" (UniqueName: \"kubernetes.io/projected/b65c9339-9f97-4c2f-8d6f-4344c7c33395-kube-api-access-ss7c5\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.256333 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-utilities\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.358786 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-utilities\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 
06:06:01.358943 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-catalog-content\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.358971 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7c5\" (UniqueName: \"kubernetes.io/projected/b65c9339-9f97-4c2f-8d6f-4344c7c33395-kube-api-access-ss7c5\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.359327 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-utilities\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.359529 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-catalog-content\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.384269 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7c5\" (UniqueName: \"kubernetes.io/projected/b65c9339-9f97-4c2f-8d6f-4344c7c33395-kube-api-access-ss7c5\") pod \"certified-operators-8rffb\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:01 crc kubenswrapper[4854]: I0103 06:06:01.405724 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.006427 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rffb"] Jan 03 06:06:02 crc kubenswrapper[4854]: W0103 06:06:02.007410 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb65c9339_9f97_4c2f_8d6f_4344c7c33395.slice/crio-1dbfc285598c5a140de5b47bfd1a5b5a608707922e0c100b0246e4fd7047b470 WatchSource:0}: Error finding container 1dbfc285598c5a140de5b47bfd1a5b5a608707922e0c100b0246e4fd7047b470: Status 404 returned error can't find the container with id 1dbfc285598c5a140de5b47bfd1a5b5a608707922e0c100b0246e4fd7047b470 Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.319352 4854 generic.go:334] "Generic (PLEG): container finished" podID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerID="601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac" exitCode=0 Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.319458 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rffb" event={"ID":"b65c9339-9f97-4c2f-8d6f-4344c7c33395","Type":"ContainerDied","Data":"601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac"} Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.319503 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rffb" event={"ID":"b65c9339-9f97-4c2f-8d6f-4344c7c33395","Type":"ContainerStarted","Data":"1dbfc285598c5a140de5b47bfd1a5b5a608707922e0c100b0246e4fd7047b470"} Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.326807 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerStarted","Data":"c39e5208b69156226e178ce80faa26196af87e3f48d10417fd4504413248c32d"} Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.327024 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="proxy-httpd" containerID="cri-o://c39e5208b69156226e178ce80faa26196af87e3f48d10417fd4504413248c32d" gracePeriod=30 Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.327055 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="sg-core" containerID="cri-o://b6362ccee04a2d06ba1f861bcebe22f9dcd537a259ee2e1da76d487e037b24fa" gracePeriod=30 Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.327113 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.327020 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="ceilometer-central-agent" containerID="cri-o://d33d13d566c127c37f5680d5586632dc4aa61a6e49e2aa4a1e527452ae54db25" gracePeriod=30 Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.327067 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="ceilometer-notification-agent" containerID="cri-o://8f74b26f4c88d2007763582aa2c604f6724eaa94eb8f121534ad82f1a849bfd1" gracePeriod=30 Jan 
03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.382690 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.527832135 podStartE2EDuration="6.382665127s" podCreationTimestamp="2026-01-03 06:05:56 +0000 UTC" firstStartedPulling="2026-01-03 06:05:57.357003278 +0000 UTC m=+1535.683579850" lastFinishedPulling="2026-01-03 06:06:01.21183627 +0000 UTC m=+1539.538412842" observedRunningTime="2026-01-03 06:06:02.373178606 +0000 UTC m=+1540.699755208" watchObservedRunningTime="2026-01-03 06:06:02.382665127 +0000 UTC m=+1540.709241709" Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.787199 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lr4nt"] Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.790305 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.845552 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lr4nt"] Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.858311 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-operator-scripts\") pod \"nova-api-db-create-lr4nt\" (UID: \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\") " pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.858471 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56vks\" (UniqueName: \"kubernetes.io/projected/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-kube-api-access-56vks\") pod \"nova-api-db-create-lr4nt\" (UID: \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\") " pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.909371 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5jcbw"] Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.938297 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5jcbw" Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.986836 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5jcbw"] Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.990489 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56vks\" (UniqueName: \"kubernetes.io/projected/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-kube-api-access-56vks\") pod \"nova-api-db-create-lr4nt\" (UID: \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\") " pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:02 crc kubenswrapper[4854]: I0103 06:06:02.999078 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-operator-scripts\") pod \"nova-api-db-create-lr4nt\" (UID: \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\") " pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.000524 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-operator-scripts\") pod \"nova-api-db-create-lr4nt\" (UID: \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\") " pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.019146 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56vks\" (UniqueName: \"kubernetes.io/projected/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-kube-api-access-56vks\") pod \"nova-api-db-create-lr4nt\" (UID: \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\") " pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.070456 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e562-account-create-update-k68zp"] Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.073262 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e562-account-create-update-k68zp" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.082628 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.086134 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e562-account-create-update-k68zp"] Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.104664 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jvwwp"] Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.106547 4854 util.go:30] "No sandbox for pod can be found. 
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.070456 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e562-account-create-update-k68zp"]
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.073262 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e562-account-create-update-k68zp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.082628 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.086134 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e562-account-create-update-k68zp"]
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.104664 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jvwwp"]
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.106547 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jvwwp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.120717 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b6b681-415b-40ee-9510-801116f895c8-operator-scripts\") pod \"nova-cell1-db-create-jvwwp\" (UID: \"82b6b681-415b-40ee-9510-801116f895c8\") " pod="openstack/nova-cell1-db-create-jvwwp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.120805 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvwvv\" (UniqueName: \"kubernetes.io/projected/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-kube-api-access-vvwvv\") pod \"nova-api-e562-account-create-update-k68zp\" (UID: \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\") " pod="openstack/nova-api-e562-account-create-update-k68zp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.120858 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dj8k\" (UniqueName: \"kubernetes.io/projected/a04b84e4-f513-40cc-bd0e-852449fb839d-kube-api-access-7dj8k\") pod \"nova-cell0-db-create-5jcbw\" (UID: \"a04b84e4-f513-40cc-bd0e-852449fb839d\") " pod="openstack/nova-cell0-db-create-5jcbw"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.120997 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-operator-scripts\") pod \"nova-api-e562-account-create-update-k68zp\" (UID: \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\") " pod="openstack/nova-api-e562-account-create-update-k68zp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.121035 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04b84e4-f513-40cc-bd0e-852449fb839d-operator-scripts\") pod \"nova-cell0-db-create-5jcbw\" (UID: \"a04b84e4-f513-40cc-bd0e-852449fb839d\") " pod="openstack/nova-cell0-db-create-5jcbw"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.121083 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nx4g\" (UniqueName: \"kubernetes.io/projected/82b6b681-415b-40ee-9510-801116f895c8-kube-api-access-5nx4g\") pod \"nova-cell1-db-create-jvwwp\" (UID: \"82b6b681-415b-40ee-9510-801116f895c8\") " pod="openstack/nova-cell1-db-create-jvwwp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.148780 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jvwwp"]
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.161511 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0bd1-account-create-update-jzm2v"]
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.163976 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.175328 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.213196 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0bd1-account-create-update-jzm2v"]
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.224872 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-operator-scripts\") pod \"nova-api-e562-account-create-update-k68zp\" (UID: \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\") " pod="openstack/nova-api-e562-account-create-update-k68zp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.224957 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04b84e4-f513-40cc-bd0e-852449fb839d-operator-scripts\") pod \"nova-cell0-db-create-5jcbw\" (UID: \"a04b84e4-f513-40cc-bd0e-852449fb839d\") " pod="openstack/nova-cell0-db-create-5jcbw"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.225019 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nx4g\" (UniqueName: \"kubernetes.io/projected/82b6b681-415b-40ee-9510-801116f895c8-kube-api-access-5nx4g\") pod \"nova-cell1-db-create-jvwwp\" (UID: \"82b6b681-415b-40ee-9510-801116f895c8\") " pod="openstack/nova-cell1-db-create-jvwwp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.225261 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8224418-e8de-49e2-a7f5-059ea9ed6f72-operator-scripts\") pod \"nova-cell0-0bd1-account-create-update-jzm2v\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.225356 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b6b681-415b-40ee-9510-801116f895c8-operator-scripts\") pod \"nova-cell1-db-create-jvwwp\" (UID: \"82b6b681-415b-40ee-9510-801116f895c8\") " pod="openstack/nova-cell1-db-create-jvwwp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.225438 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvwvv\" (UniqueName: \"kubernetes.io/projected/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-kube-api-access-vvwvv\") pod \"nova-api-e562-account-create-update-k68zp\" (UID: \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\") " pod="openstack/nova-api-e562-account-create-update-k68zp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.225512 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dj8k\" (UniqueName: \"kubernetes.io/projected/a04b84e4-f513-40cc-bd0e-852449fb839d-kube-api-access-7dj8k\") pod \"nova-cell0-db-create-5jcbw\" (UID: \"a04b84e4-f513-40cc-bd0e-852449fb839d\") " pod="openstack/nova-cell0-db-create-5jcbw"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.225658 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxt8b\" (UniqueName: \"kubernetes.io/projected/e8224418-e8de-49e2-a7f5-059ea9ed6f72-kube-api-access-xxt8b\") pod \"nova-cell0-0bd1-account-create-update-jzm2v\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v"
\"kubernetes.io/projected/e8224418-e8de-49e2-a7f5-059ea9ed6f72-kube-api-access-xxt8b\") pod \"nova-cell0-0bd1-account-create-update-jzm2v\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.225813 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-operator-scripts\") pod \"nova-api-e562-account-create-update-k68zp\" (UID: \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\") " pod="openstack/nova-api-e562-account-create-update-k68zp" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.226542 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04b84e4-f513-40cc-bd0e-852449fb839d-operator-scripts\") pod \"nova-cell0-db-create-5jcbw\" (UID: \"a04b84e4-f513-40cc-bd0e-852449fb839d\") " pod="openstack/nova-cell0-db-create-5jcbw" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.227695 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b6b681-415b-40ee-9510-801116f895c8-operator-scripts\") pod \"nova-cell1-db-create-jvwwp\" (UID: \"82b6b681-415b-40ee-9510-801116f895c8\") " pod="openstack/nova-cell1-db-create-jvwwp" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.243200 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.256746 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvwvv\" (UniqueName: \"kubernetes.io/projected/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-kube-api-access-vvwvv\") pod \"nova-api-e562-account-create-update-k68zp\" (UID: \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\") " pod="openstack/nova-api-e562-account-create-update-k68zp" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.257648 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dj8k\" (UniqueName: \"kubernetes.io/projected/a04b84e4-f513-40cc-bd0e-852449fb839d-kube-api-access-7dj8k\") pod \"nova-cell0-db-create-5jcbw\" (UID: \"a04b84e4-f513-40cc-bd0e-852449fb839d\") " pod="openstack/nova-cell0-db-create-5jcbw" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.271142 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-02bf-account-create-update-k67dd"] Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.273042 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-02bf-account-create-update-k67dd" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.276840 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nx4g\" (UniqueName: \"kubernetes.io/projected/82b6b681-415b-40ee-9510-801116f895c8-kube-api-access-5nx4g\") pod \"nova-cell1-db-create-jvwwp\" (UID: \"82b6b681-415b-40ee-9510-801116f895c8\") " pod="openstack/nova-cell1-db-create-jvwwp" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.280419 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.282432 4854 util.go:30] "No sandbox for pod can be found. 
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.316837 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-02bf-account-create-update-k67dd"]
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.380676 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxt8b\" (UniqueName: \"kubernetes.io/projected/e8224418-e8de-49e2-a7f5-059ea9ed6f72-kube-api-access-xxt8b\") pod \"nova-cell0-0bd1-account-create-update-jzm2v\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.381419 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-operator-scripts\") pod \"nova-cell1-02bf-account-create-update-k67dd\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " pod="openstack/nova-cell1-02bf-account-create-update-k67dd"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.381646 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8224418-e8de-49e2-a7f5-059ea9ed6f72-operator-scripts\") pod \"nova-cell0-0bd1-account-create-update-jzm2v\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.381888 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qd7\" (UniqueName: \"kubernetes.io/projected/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-kube-api-access-q4qd7\") pod \"nova-cell1-02bf-account-create-update-k67dd\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " pod="openstack/nova-cell1-02bf-account-create-update-k67dd"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.383241 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8224418-e8de-49e2-a7f5-059ea9ed6f72-operator-scripts\") pod \"nova-cell0-0bd1-account-create-update-jzm2v\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.411920 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxt8b\" (UniqueName: \"kubernetes.io/projected/e8224418-e8de-49e2-a7f5-059ea9ed6f72-kube-api-access-xxt8b\") pod \"nova-cell0-0bd1-account-create-update-jzm2v\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.415934 4854 generic.go:334] "Generic (PLEG): container finished" podID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerID="c39e5208b69156226e178ce80faa26196af87e3f48d10417fd4504413248c32d" exitCode=0
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.415968 4854 generic.go:334] "Generic (PLEG): container finished" podID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerID="b6362ccee04a2d06ba1f861bcebe22f9dcd537a259ee2e1da76d487e037b24fa" exitCode=2
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.415976 4854 generic.go:334] "Generic (PLEG): container finished" podID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerID="8f74b26f4c88d2007763582aa2c604f6724eaa94eb8f121534ad82f1a849bfd1" exitCode=0
containerID="8f74b26f4c88d2007763582aa2c604f6724eaa94eb8f121534ad82f1a849bfd1" exitCode=0 Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.415982 4854 generic.go:334] "Generic (PLEG): container finished" podID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerID="d33d13d566c127c37f5680d5586632dc4aa61a6e49e2aa4a1e527452ae54db25" exitCode=0 Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.416003 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerDied","Data":"c39e5208b69156226e178ce80faa26196af87e3f48d10417fd4504413248c32d"} Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.416030 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerDied","Data":"b6362ccee04a2d06ba1f861bcebe22f9dcd537a259ee2e1da76d487e037b24fa"} Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.416041 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerDied","Data":"8f74b26f4c88d2007763582aa2c604f6724eaa94eb8f121534ad82f1a849bfd1"} Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.416050 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerDied","Data":"d33d13d566c127c37f5680d5586632dc4aa61a6e49e2aa4a1e527452ae54db25"} Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.435070 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e562-account-create-update-k68zp" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.475925 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jvwwp" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.506595 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-operator-scripts\") pod \"nova-cell1-02bf-account-create-update-k67dd\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " pod="openstack/nova-cell1-02bf-account-create-update-k67dd" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.507575 4854 util.go:30] "No sandbox for pod can be found. 
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.435070 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e562-account-create-update-k68zp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.475925 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jvwwp"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.506595 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-operator-scripts\") pod \"nova-cell1-02bf-account-create-update-k67dd\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " pod="openstack/nova-cell1-02bf-account-create-update-k67dd"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.507575 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.509280 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qd7\" (UniqueName: \"kubernetes.io/projected/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-kube-api-access-q4qd7\") pod \"nova-cell1-02bf-account-create-update-k67dd\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " pod="openstack/nova-cell1-02bf-account-create-update-k67dd"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.513206 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-operator-scripts\") pod \"nova-cell1-02bf-account-create-update-k67dd\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " pod="openstack/nova-cell1-02bf-account-create-update-k67dd"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.573286 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qd7\" (UniqueName: \"kubernetes.io/projected/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-kube-api-access-q4qd7\") pod \"nova-cell1-02bf-account-create-update-k67dd\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " pod="openstack/nova-cell1-02bf-account-create-update-k67dd"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.626038 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-02bf-account-create-update-k67dd"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.699966 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.755522 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-scripts\") pod \"b0121a5f-35a0-4d35-94f8-0438121c73c7\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") "
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.755640 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-config-data\") pod \"b0121a5f-35a0-4d35-94f8-0438121c73c7\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") "
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.755694 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-combined-ca-bundle\") pod \"b0121a5f-35a0-4d35-94f8-0438121c73c7\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") "
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.755720 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-run-httpd\") pod \"b0121a5f-35a0-4d35-94f8-0438121c73c7\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") "
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.755796 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-log-httpd\") pod \"b0121a5f-35a0-4d35-94f8-0438121c73c7\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") "
Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.755835 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-sg-core-conf-yaml\") pod \"b0121a5f-35a0-4d35-94f8-0438121c73c7\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") "
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-sg-core-conf-yaml\") pod \"b0121a5f-35a0-4d35-94f8-0438121c73c7\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.755883 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b2nm\" (UniqueName: \"kubernetes.io/projected/b0121a5f-35a0-4d35-94f8-0438121c73c7-kube-api-access-6b2nm\") pod \"b0121a5f-35a0-4d35-94f8-0438121c73c7\" (UID: \"b0121a5f-35a0-4d35-94f8-0438121c73c7\") " Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.758064 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b0121a5f-35a0-4d35-94f8-0438121c73c7" (UID: "b0121a5f-35a0-4d35-94f8-0438121c73c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.758504 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b0121a5f-35a0-4d35-94f8-0438121c73c7" (UID: "b0121a5f-35a0-4d35-94f8-0438121c73c7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.773181 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.773802 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0121a5f-35a0-4d35-94f8-0438121c73c7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.779296 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-scripts" (OuterVolumeSpecName: "scripts") pod "b0121a5f-35a0-4d35-94f8-0438121c73c7" (UID: "b0121a5f-35a0-4d35-94f8-0438121c73c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.791158 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0121a5f-35a0-4d35-94f8-0438121c73c7-kube-api-access-6b2nm" (OuterVolumeSpecName: "kube-api-access-6b2nm") pod "b0121a5f-35a0-4d35-94f8-0438121c73c7" (UID: "b0121a5f-35a0-4d35-94f8-0438121c73c7"). InnerVolumeSpecName "kube-api-access-6b2nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.817531 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b0121a5f-35a0-4d35-94f8-0438121c73c7" (UID: "b0121a5f-35a0-4d35-94f8-0438121c73c7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.893636 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.893698 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b2nm\" (UniqueName: \"kubernetes.io/projected/b0121a5f-35a0-4d35-94f8-0438121c73c7-kube-api-access-6b2nm\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:03 crc kubenswrapper[4854]: I0103 06:06:03.893715 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.127950 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0121a5f-35a0-4d35-94f8-0438121c73c7" (UID: "b0121a5f-35a0-4d35-94f8-0438121c73c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.197713 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5jcbw"] Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.222252 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.299304 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-config-data" (OuterVolumeSpecName: "config-data") pod "b0121a5f-35a0-4d35-94f8-0438121c73c7" (UID: "b0121a5f-35a0-4d35-94f8-0438121c73c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.348199 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0121a5f-35a0-4d35-94f8-0438121c73c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.440410 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lr4nt"] Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.457798 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jcbw" event={"ID":"a04b84e4-f513-40cc-bd0e-852449fb839d","Type":"ContainerStarted","Data":"7d801cb5021b3b8484d8e785eb6f24a5be92ede756b5eb8c97f5081d04093377"} Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.472750 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rffb" event={"ID":"b65c9339-9f97-4c2f-8d6f-4344c7c33395","Type":"ContainerStarted","Data":"8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c"} Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.479681 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0121a5f-35a0-4d35-94f8-0438121c73c7","Type":"ContainerDied","Data":"8e98163000400c4a3a823138b61e728123bddc1f77cb7b68539f76d523165909"} Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.479714 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.479762 4854 scope.go:117] "RemoveContainer" containerID="c39e5208b69156226e178ce80faa26196af87e3f48d10417fd4504413248c32d" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.553851 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.569557 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.569652 4854 scope.go:117] "RemoveContainer" containerID="b6362ccee04a2d06ba1f861bcebe22f9dcd537a259ee2e1da76d487e037b24fa" Jan 03 06:06:04 crc kubenswrapper[4854]: W0103 06:06:04.590311 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1ee7c12_946c_4a6c_b15b_15cd1c15bd30.slice/crio-813526604fef79132b8d3e1e612cdee65e7602dd4d1daea39b3f618c0e07691b WatchSource:0}: Error finding container 813526604fef79132b8d3e1e612cdee65e7602dd4d1daea39b3f618c0e07691b: Status 404 returned error can't find the container with id 813526604fef79132b8d3e1e612cdee65e7602dd4d1daea39b3f618c0e07691b Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.609887 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e562-account-create-update-k68zp"] Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.656273 4854 scope.go:117] "RemoveContainer" containerID="8f74b26f4c88d2007763582aa2c604f6724eaa94eb8f121534ad82f1a849bfd1" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.664138 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0bd1-account-create-update-jzm2v"] Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.705515 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:04 crc kubenswrapper[4854]: E0103 06:06:04.706413 4854 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="sg-core" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.706427 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="sg-core" Jan 03 06:06:04 crc kubenswrapper[4854]: E0103 06:06:04.706451 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="ceilometer-central-agent" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.706458 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="ceilometer-central-agent" Jan 03 06:06:04 crc kubenswrapper[4854]: E0103 06:06:04.706481 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="ceilometer-notification-agent" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.706487 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="ceilometer-notification-agent" Jan 03 06:06:04 crc kubenswrapper[4854]: E0103 06:06:04.706508 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="proxy-httpd" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.706513 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="proxy-httpd" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.706731 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="proxy-httpd" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.706745 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="sg-core" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.706760 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="ceilometer-notification-agent" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.706768 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" containerName="ceilometer-central-agent" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.737853 4854 util.go:30] "No sandbox for pod can be found. 
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.737853 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.745990 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.746866 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.801383 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.823899 4854 scope.go:117] "RemoveContainer" containerID="d33d13d566c127c37f5680d5586632dc4aa61a6e49e2aa4a1e527452ae54db25"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.861357 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-02bf-account-create-update-k67dd"]
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.874937 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-log-httpd\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.874992 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-run-httpd\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.875018 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.875058 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-scripts\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.875139 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-config-data\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.875185 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfbnh\" (UniqueName: \"kubernetes.io/projected/217dbe70-f377-42d7-8b9a-bbc22f53b861-kube-api-access-kfbnh\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.875249 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.912246 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jvwwp"]
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.981640 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-log-httpd\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.981695 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-run-httpd\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.981726 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.981783 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-scripts\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.981863 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-config-data\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.981914 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfbnh\" (UniqueName: \"kubernetes.io/projected/217dbe70-f377-42d7-8b9a-bbc22f53b861-kube-api-access-kfbnh\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.981995 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.986851 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-log-httpd\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.988685 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.989668 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0"
\"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.989830 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-run-httpd\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0" Jan 03 06:06:04 crc kubenswrapper[4854]: I0103 06:06:04.996903 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-scripts\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0" Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.026589 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-config-data\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0" Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.050002 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfbnh\" (UniqueName: \"kubernetes.io/projected/217dbe70-f377-42d7-8b9a-bbc22f53b861-kube-api-access-kfbnh\") pod \"ceilometer-0\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") " pod="openstack/ceilometer-0" Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.066880 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.427523 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.495038 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-79cf8b54b6-vks4f"] Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.495551 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-79cf8b54b6-vks4f" podUID="57dd35dc-074c-4a29-92f6-afebc0f9fad3" containerName="heat-engine" containerID="cri-o://86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1" gracePeriod=60 Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.498024 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lr4nt" event={"ID":"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88","Type":"ContainerStarted","Data":"b3a4bb6ae85ff462a04fcb864c21308bfdcadbb3fe731e30db4a5c9c9c8a7133"} Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.522036 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jvwwp" event={"ID":"82b6b681-415b-40ee-9510-801116f895c8","Type":"ContainerStarted","Data":"f46d21964ef656041716a752414400eb2ddd44c92e3a788a00fc2caa0bb36d23"} Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.523355 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e562-account-create-update-k68zp" event={"ID":"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30","Type":"ContainerStarted","Data":"813526604fef79132b8d3e1e612cdee65e7602dd4d1daea39b3f618c0e07691b"} Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.544538 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v" 
event={"ID":"e8224418-e8de-49e2-a7f5-059ea9ed6f72","Type":"ContainerStarted","Data":"479bd588e0cebe0f28adc4beafc651688f9a9714769b9b7ed71c6309d93e8b28"} Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.544582 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v" event={"ID":"e8224418-e8de-49e2-a7f5-059ea9ed6f72","Type":"ContainerStarted","Data":"14ba4724a990d3c3183e971733e077d1eca48ea223fb498acf1738e4ccf46d03"} Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.556392 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jcbw" event={"ID":"a04b84e4-f513-40cc-bd0e-852449fb839d","Type":"ContainerStarted","Data":"c4c4b41efa4c18bfbe4ffb15e8f65666f6bf430e325d83a138b571f34baf8da0"} Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.562701 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-02bf-account-create-update-k67dd" event={"ID":"a7b23ad2-3ba6-44d4-88a0-aad1458970d0","Type":"ContainerStarted","Data":"a993bbe30b41dee0ba41876df18879f88ecc2083ee807a38fcc6b0474632d614"} Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.578375 4854 generic.go:334] "Generic (PLEG): container finished" podID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerID="8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c" exitCode=0 Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.578421 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rffb" event={"ID":"b65c9339-9f97-4c2f-8d6f-4344c7c33395","Type":"ContainerDied","Data":"8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c"} Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.601672 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v" podStartSLOduration=3.601654536 podStartE2EDuration="3.601654536s" podCreationTimestamp="2026-01-03 06:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:05.600255392 +0000 UTC m=+1543.926831954" watchObservedRunningTime="2026-01-03 06:06:05.601654536 +0000 UTC m=+1543.928231108" Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.650933 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.694125 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-5jcbw" podStartSLOduration=3.694074391 podStartE2EDuration="3.694074391s" podCreationTimestamp="2026-01-03 06:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:05.677028667 +0000 UTC m=+1544.003605259" watchObservedRunningTime="2026-01-03 06:06:05.694074391 +0000 UTC m=+1544.020650973" Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.872894 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:06:05 crc kubenswrapper[4854]: I0103 06:06:05.974279 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5fd96db984-rmm7s"] Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.097868 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" Jan 03 06:06:06 crc 
Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.181895 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0121a5f-35a0-4d35-94f8-0438121c73c7" path="/var/lib/kubelet/pods/b0121a5f-35a0-4d35-94f8-0438121c73c7/volumes"
Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.182777 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d569589c9-7q7mv"]
Jan 03 06:06:06 crc kubenswrapper[4854]: E0103 06:06:06.500835 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 03 06:06:06 crc kubenswrapper[4854]: E0103 06:06:06.518683 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.547912 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5fd96db984-rmm7s"
Jan 03 06:06:06 crc kubenswrapper[4854]: E0103 06:06:06.548216 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 03 06:06:06 crc kubenswrapper[4854]: E0103 06:06:06.548263 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-79cf8b54b6-vks4f" podUID="57dd35dc-074c-4a29-92f6-afebc0f9fad3" containerName="heat-engine"
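The ExecSync failures above are the heat-engine readiness probe (/usr/bin/pgrep -r DRST heat-engine) racing the 60-second graceful shutdown requested at 06:06:05.495551 for the same container: exec into a stopping container is rejected ("cannot register an exec PID: container is stopping"), so the prober reports an error rather than a real probe failure. Counting "Probe errored" entries per container separates this shutdown noise from genuinely flapping probes (sketch):

import re
import sys
from collections import Counter

# Tally "Probe errored" events by probe type, pod, and container.
ERR = re.compile(r'"Probe errored".*probeType="([^"]+)" pod="([^"]+)".*containerName="([^"]+)"')

tally = Counter()
with open(sys.argv[1]) as fh:
    for line in fh:
        if m := ERR.search(line):
            tally[m.groups()] += 1

for (ptype, pod, name), n in tally.most_common():
    print(f"{n:4d}  {ptype:9s}  {pod} / {name}")
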
event={"ID":"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88","Type":"ContainerDied","Data":"4f1afe4a0d42833cb881a50df3661bf589dc97c1dfd0b8b3b0ad1fae7b32e14f"} Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.637839 4854 generic.go:334] "Generic (PLEG): container finished" podID="82b6b681-415b-40ee-9510-801116f895c8" containerID="a46991e8b0af15112f224c8ae0d956377d62db81674a0f7d67408d34eba15989" exitCode=0 Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.637897 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jvwwp" event={"ID":"82b6b681-415b-40ee-9510-801116f895c8","Type":"ContainerDied","Data":"a46991e8b0af15112f224c8ae0d956377d62db81674a0f7d67408d34eba15989"} Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.639818 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fd96db984-rmm7s" event={"ID":"b1aee80e-651a-4434-a1da-34bd6dbd83bd","Type":"ContainerDied","Data":"9de06276a04ebe13a7314bbf8d03bf799dc868d3a0c6db676c6c5b8203034250"} Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.639846 4854 scope.go:117] "RemoveContainer" containerID="e606ed260a56ef24ec6b68d343d6a99a1cbe6140a17972664be5ee735ad4811f" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.639902 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5fd96db984-rmm7s" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.653275 4854 generic.go:334] "Generic (PLEG): container finished" podID="a1ee7c12-946c-4a6c-b15b-15cd1c15bd30" containerID="4b9ea40ee8db87fb23371989a5e7c70468dd6a8379abdf18a9ad3a1e0d124b25" exitCode=0 Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.653415 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e562-account-create-update-k68zp" event={"ID":"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30","Type":"ContainerDied","Data":"4b9ea40ee8db87fb23371989a5e7c70468dd6a8379abdf18a9ad3a1e0d124b25"} Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.660265 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8224418-e8de-49e2-a7f5-059ea9ed6f72" containerID="479bd588e0cebe0f28adc4beafc651688f9a9714769b9b7ed71c6309d93e8b28" exitCode=0 Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.660362 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v" event={"ID":"e8224418-e8de-49e2-a7f5-059ea9ed6f72","Type":"ContainerDied","Data":"479bd588e0cebe0f28adc4beafc651688f9a9714769b9b7ed71c6309d93e8b28"} Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.663397 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-combined-ca-bundle\") pod \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.663647 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data\") pod \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.663673 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chkk\" (UniqueName: \"kubernetes.io/projected/b1aee80e-651a-4434-a1da-34bd6dbd83bd-kube-api-access-4chkk\") pod 
\"b1aee80e-651a-4434-a1da-34bd6dbd83bd\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.663860 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data-custom\") pod \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\" (UID: \"b1aee80e-651a-4434-a1da-34bd6dbd83bd\") " Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.676893 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1aee80e-651a-4434-a1da-34bd6dbd83bd-kube-api-access-4chkk" (OuterVolumeSpecName: "kube-api-access-4chkk") pod "b1aee80e-651a-4434-a1da-34bd6dbd83bd" (UID: "b1aee80e-651a-4434-a1da-34bd6dbd83bd"). InnerVolumeSpecName "kube-api-access-4chkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.678419 4854 generic.go:334] "Generic (PLEG): container finished" podID="a04b84e4-f513-40cc-bd0e-852449fb839d" containerID="c4c4b41efa4c18bfbe4ffb15e8f65666f6bf430e325d83a138b571f34baf8da0" exitCode=0 Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.678463 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jcbw" event={"ID":"a04b84e4-f513-40cc-bd0e-852449fb839d","Type":"ContainerDied","Data":"c4c4b41efa4c18bfbe4ffb15e8f65666f6bf430e325d83a138b571f34baf8da0"} Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.682816 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b1aee80e-651a-4434-a1da-34bd6dbd83bd" (UID: "b1aee80e-651a-4434-a1da-34bd6dbd83bd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.726769 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1aee80e-651a-4434-a1da-34bd6dbd83bd" (UID: "b1aee80e-651a-4434-a1da-34bd6dbd83bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.752415 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data" (OuterVolumeSpecName: "config-data") pod "b1aee80e-651a-4434-a1da-34bd6dbd83bd" (UID: "b1aee80e-651a-4434-a1da-34bd6dbd83bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.768436 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.768465 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4chkk\" (UniqueName: \"kubernetes.io/projected/b1aee80e-651a-4434-a1da-34bd6dbd83bd-kube-api-access-4chkk\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.768492 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.768501 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aee80e-651a-4434-a1da-34bd6dbd83bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.823222 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.971695 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-combined-ca-bundle\") pod \"3a290934-d2f6-475a-814a-209a27b7e897\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.971852 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data-custom\") pod \"3a290934-d2f6-475a-814a-209a27b7e897\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.971952 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjfm7\" (UniqueName: \"kubernetes.io/projected/3a290934-d2f6-475a-814a-209a27b7e897-kube-api-access-jjfm7\") pod \"3a290934-d2f6-475a-814a-209a27b7e897\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.971970 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data\") pod \"3a290934-d2f6-475a-814a-209a27b7e897\" (UID: \"3a290934-d2f6-475a-814a-209a27b7e897\") " Jan 03 06:06:06 crc kubenswrapper[4854]: I0103 06:06:06.988719 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a290934-d2f6-475a-814a-209a27b7e897" (UID: "3a290934-d2f6-475a-814a-209a27b7e897"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.006493 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a290934-d2f6-475a-814a-209a27b7e897-kube-api-access-jjfm7" (OuterVolumeSpecName: "kube-api-access-jjfm7") pod "3a290934-d2f6-475a-814a-209a27b7e897" (UID: "3a290934-d2f6-475a-814a-209a27b7e897"). InnerVolumeSpecName "kube-api-access-jjfm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.061228 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a290934-d2f6-475a-814a-209a27b7e897" (UID: "3a290934-d2f6-475a-814a-209a27b7e897"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.075032 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.075072 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjfm7\" (UniqueName: \"kubernetes.io/projected/3a290934-d2f6-475a-814a-209a27b7e897-kube-api-access-jjfm7\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.075096 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.076172 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5fd96db984-rmm7s"] Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.095716 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5fd96db984-rmm7s"] Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.116165 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data" (OuterVolumeSpecName: "config-data") pod "3a290934-d2f6-475a-814a-209a27b7e897" (UID: "3a290934-d2f6-475a-814a-209a27b7e897"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.179943 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a290934-d2f6-475a-814a-209a27b7e897-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.436491 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.717617 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" event={"ID":"3a290934-d2f6-475a-814a-209a27b7e897","Type":"ContainerDied","Data":"75fdfe4bd8f4858407c7adc62ebd4bc322ebbd9f69636d7cfeef7b5fad17acd4"} Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.717839 4854 scope.go:117] "RemoveContainer" containerID="d7f3eff565029ff58c943c330d55ab1ded2a373b273715a6cf8ef2ffe0c6fdce" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.717736 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d569589c9-7q7mv" Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.739552 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rffb" event={"ID":"b65c9339-9f97-4c2f-8d6f-4344c7c33395","Type":"ContainerStarted","Data":"cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8"} Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.761099 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerStarted","Data":"eba305729573796e2491bfe8e22a01c956224f31b8ca5337ae374b8ac948ef69"} Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.761145 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerStarted","Data":"c36c0aa53f56e323a283942b7ab861f27bab7b9109892aac492aa97b18337f62"} Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.883381 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d569589c9-7q7mv"] Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.947983 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6d569589c9-7q7mv"] Jan 03 06:06:07 crc kubenswrapper[4854]: I0103 06:06:07.972380 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8rffb" podStartSLOduration=4.238850987 podStartE2EDuration="7.972351883s" podCreationTimestamp="2026-01-03 06:06:00 +0000 UTC" firstStartedPulling="2026-01-03 06:06:02.322253069 +0000 UTC m=+1540.648829641" lastFinishedPulling="2026-01-03 06:06:06.055753965 +0000 UTC m=+1544.382330537" observedRunningTime="2026-01-03 06:06:07.794112814 +0000 UTC m=+1546.120689396" watchObservedRunningTime="2026-01-03 06:06:07.972351883 +0000 UTC m=+1546.298928455" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.142519 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a290934-d2f6-475a-814a-209a27b7e897" path="/var/lib/kubelet/pods/3a290934-d2f6-475a-814a-209a27b7e897/volumes" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.143338 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" path="/var/lib/kubelet/pods/b1aee80e-651a-4434-a1da-34bd6dbd83bd/volumes" Jan 03 
06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.481713 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-02bf-account-create-update-k67dd" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.580911 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qd7\" (UniqueName: \"kubernetes.io/projected/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-kube-api-access-q4qd7\") pod \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.581060 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-operator-scripts\") pod \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\" (UID: \"a7b23ad2-3ba6-44d4-88a0-aad1458970d0\") " Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.582170 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7b23ad2-3ba6-44d4-88a0-aad1458970d0" (UID: "a7b23ad2-3ba6-44d4-88a0-aad1458970d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.592817 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-kube-api-access-q4qd7" (OuterVolumeSpecName: "kube-api-access-q4qd7") pod "a7b23ad2-3ba6-44d4-88a0-aad1458970d0" (UID: "a7b23ad2-3ba6-44d4-88a0-aad1458970d0"). InnerVolumeSpecName "kube-api-access-q4qd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.684646 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.684945 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qd7\" (UniqueName: \"kubernetes.io/projected/a7b23ad2-3ba6-44d4-88a0-aad1458970d0-kube-api-access-q4qd7\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.804116 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jvwwp" event={"ID":"82b6b681-415b-40ee-9510-801116f895c8","Type":"ContainerDied","Data":"f46d21964ef656041716a752414400eb2ddd44c92e3a788a00fc2caa0bb36d23"} Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.805187 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46d21964ef656041716a752414400eb2ddd44c92e3a788a00fc2caa0bb36d23" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.825450 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e562-account-create-update-k68zp" event={"ID":"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30","Type":"ContainerDied","Data":"813526604fef79132b8d3e1e612cdee65e7602dd4d1daea39b3f618c0e07691b"} Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.825494 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="813526604fef79132b8d3e1e612cdee65e7602dd4d1daea39b3f618c0e07691b" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.844784 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v" event={"ID":"e8224418-e8de-49e2-a7f5-059ea9ed6f72","Type":"ContainerDied","Data":"14ba4724a990d3c3183e971733e077d1eca48ea223fb498acf1738e4ccf46d03"} Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.844830 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ba4724a990d3c3183e971733e077d1eca48ea223fb498acf1738e4ccf46d03" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.870765 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-02bf-account-create-update-k67dd" event={"ID":"a7b23ad2-3ba6-44d4-88a0-aad1458970d0","Type":"ContainerDied","Data":"a993bbe30b41dee0ba41876df18879f88ecc2083ee807a38fcc6b0474632d614"} Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.870806 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a993bbe30b41dee0ba41876df18879f88ecc2083ee807a38fcc6b0474632d614" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.870867 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-02bf-account-create-update-k67dd" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.880184 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerStarted","Data":"a1f982716199eeee216fa9cde5b54660f4d8cf68fd9401a187541846fa072f27"} Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.922068 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jvwwp" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.947512 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.961130 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e562-account-create-update-k68zp" Jan 03 06:06:08 crc kubenswrapper[4854]: I0103 06:06:08.972503 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5jcbw" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.000330 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.101778 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8224418-e8de-49e2-a7f5-059ea9ed6f72-operator-scripts\") pod \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.101846 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvwvv\" (UniqueName: \"kubernetes.io/projected/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-kube-api-access-vvwvv\") pod \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\" (UID: \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.101954 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b6b681-415b-40ee-9510-801116f895c8-operator-scripts\") pod \"82b6b681-415b-40ee-9510-801116f895c8\" (UID: \"82b6b681-415b-40ee-9510-801116f895c8\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.101979 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxt8b\" (UniqueName: \"kubernetes.io/projected/e8224418-e8de-49e2-a7f5-059ea9ed6f72-kube-api-access-xxt8b\") pod \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\" (UID: \"e8224418-e8de-49e2-a7f5-059ea9ed6f72\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.102050 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nx4g\" (UniqueName: \"kubernetes.io/projected/82b6b681-415b-40ee-9510-801116f895c8-kube-api-access-5nx4g\") pod \"82b6b681-415b-40ee-9510-801116f895c8\" (UID: \"82b6b681-415b-40ee-9510-801116f895c8\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.102113 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-operator-scripts\") pod \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\" (UID: \"a1ee7c12-946c-4a6c-b15b-15cd1c15bd30\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.102149 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04b84e4-f513-40cc-bd0e-852449fb839d-operator-scripts\") pod \"a04b84e4-f513-40cc-bd0e-852449fb839d\" (UID: \"a04b84e4-f513-40cc-bd0e-852449fb839d\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.102217 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-operator-scripts\") pod \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\" (UID: \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.102245 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56vks\" (UniqueName: \"kubernetes.io/projected/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-kube-api-access-56vks\") pod \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\" (UID: \"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.102259 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dj8k\" (UniqueName: \"kubernetes.io/projected/a04b84e4-f513-40cc-bd0e-852449fb839d-kube-api-access-7dj8k\") pod \"a04b84e4-f513-40cc-bd0e-852449fb839d\" (UID: \"a04b84e4-f513-40cc-bd0e-852449fb839d\") " Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.104561 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04b84e4-f513-40cc-bd0e-852449fb839d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a04b84e4-f513-40cc-bd0e-852449fb839d" (UID: "a04b84e4-f513-40cc-bd0e-852449fb839d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.104615 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1ee7c12-946c-4a6c-b15b-15cd1c15bd30" (UID: "a1ee7c12-946c-4a6c-b15b-15cd1c15bd30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.105030 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b6b681-415b-40ee-9510-801116f895c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82b6b681-415b-40ee-9510-801116f895c8" (UID: "82b6b681-415b-40ee-9510-801116f895c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.105172 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8224418-e8de-49e2-a7f5-059ea9ed6f72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8224418-e8de-49e2-a7f5-059ea9ed6f72" (UID: "e8224418-e8de-49e2-a7f5-059ea9ed6f72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.105784 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1aa6af8e-d27e-4727-a8bc-4a2e5690cc88" (UID: "1aa6af8e-d27e-4727-a8bc-4a2e5690cc88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.110272 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04b84e4-f513-40cc-bd0e-852449fb839d-kube-api-access-7dj8k" (OuterVolumeSpecName: "kube-api-access-7dj8k") pod "a04b84e4-f513-40cc-bd0e-852449fb839d" (UID: "a04b84e4-f513-40cc-bd0e-852449fb839d"). 
InnerVolumeSpecName "kube-api-access-7dj8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.113023 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b6b681-415b-40ee-9510-801116f895c8-kube-api-access-5nx4g" (OuterVolumeSpecName: "kube-api-access-5nx4g") pod "82b6b681-415b-40ee-9510-801116f895c8" (UID: "82b6b681-415b-40ee-9510-801116f895c8"). InnerVolumeSpecName "kube-api-access-5nx4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.113309 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8224418-e8de-49e2-a7f5-059ea9ed6f72-kube-api-access-xxt8b" (OuterVolumeSpecName: "kube-api-access-xxt8b") pod "e8224418-e8de-49e2-a7f5-059ea9ed6f72" (UID: "e8224418-e8de-49e2-a7f5-059ea9ed6f72"). InnerVolumeSpecName "kube-api-access-xxt8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.113605 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-kube-api-access-56vks" (OuterVolumeSpecName: "kube-api-access-56vks") pod "1aa6af8e-d27e-4727-a8bc-4a2e5690cc88" (UID: "1aa6af8e-d27e-4727-a8bc-4a2e5690cc88"). InnerVolumeSpecName "kube-api-access-56vks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.115337 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-kube-api-access-vvwvv" (OuterVolumeSpecName: "kube-api-access-vvwvv") pod "a1ee7c12-946c-4a6c-b15b-15cd1c15bd30" (UID: "a1ee7c12-946c-4a6c-b15b-15cd1c15bd30"). InnerVolumeSpecName "kube-api-access-vvwvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206326 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b6b681-415b-40ee-9510-801116f895c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206361 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxt8b\" (UniqueName: \"kubernetes.io/projected/e8224418-e8de-49e2-a7f5-059ea9ed6f72-kube-api-access-xxt8b\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206371 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nx4g\" (UniqueName: \"kubernetes.io/projected/82b6b681-415b-40ee-9510-801116f895c8-kube-api-access-5nx4g\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206381 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206392 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04b84e4-f513-40cc-bd0e-852449fb839d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206401 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206411 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56vks\" (UniqueName: \"kubernetes.io/projected/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88-kube-api-access-56vks\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206420 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dj8k\" (UniqueName: \"kubernetes.io/projected/a04b84e4-f513-40cc-bd0e-852449fb839d-kube-api-access-7dj8k\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206429 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8224418-e8de-49e2-a7f5-059ea9ed6f72-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.206437 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvwvv\" (UniqueName: \"kubernetes.io/projected/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30-kube-api-access-vvwvv\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.891755 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jcbw" event={"ID":"a04b84e4-f513-40cc-bd0e-852449fb839d","Type":"ContainerDied","Data":"7d801cb5021b3b8484d8e785eb6f24a5be92ede756b5eb8c97f5081d04093377"} Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.892026 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d801cb5021b3b8484d8e785eb6f24a5be92ede756b5eb8c97f5081d04093377" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.891851 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5jcbw" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.897055 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0bd1-account-create-update-jzm2v" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.906981 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lr4nt" event={"ID":"1aa6af8e-d27e-4727-a8bc-4a2e5690cc88","Type":"ContainerDied","Data":"b3a4bb6ae85ff462a04fcb864c21308bfdcadbb3fe731e30db4a5c9c9c8a7133"} Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.907023 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a4bb6ae85ff462a04fcb864c21308bfdcadbb3fe731e30db4a5c9c9c8a7133" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.907129 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lr4nt" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.907578 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jvwwp" Jan 03 06:06:09 crc kubenswrapper[4854]: I0103 06:06:09.907576 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e562-account-create-update-k68zp" Jan 03 06:06:10 crc kubenswrapper[4854]: I0103 06:06:10.893266 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:06:10 crc kubenswrapper[4854]: I0103 06:06:10.913105 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerStarted","Data":"000200e55927526ddc97fdb2df68709ac6b8f08aebb43297bae5b548d4e75c59"} Jan 03 06:06:10 crc kubenswrapper[4854]: I0103 06:06:10.913365 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 06:06:10 crc kubenswrapper[4854]: I0103 06:06:10.938783 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.262715542 podStartE2EDuration="6.938764829s" podCreationTimestamp="2026-01-03 06:06:04 +0000 UTC" firstStartedPulling="2026-01-03 06:06:05.684806396 +0000 UTC m=+1544.011382968" lastFinishedPulling="2026-01-03 06:06:10.360855683 +0000 UTC m=+1548.687432255" observedRunningTime="2026-01-03 06:06:10.934384342 +0000 UTC m=+1549.260960924" watchObservedRunningTime="2026-01-03 06:06:10.938764829 +0000 UTC m=+1549.265341401" Jan 03 06:06:10 crc kubenswrapper[4854]: I0103 06:06:10.961054 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65bkj"] Jan 03 06:06:10 crc kubenswrapper[4854]: I0103 06:06:10.961517 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-65bkj" podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerName="registry-server" containerID="cri-o://04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89" gracePeriod=2 Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.406262 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.406572 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.548665 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.778225 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-utilities\") pod \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.778798 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-utilities" (OuterVolumeSpecName: "utilities") pod "e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" (UID: "e54bce74-4b1b-463a-a4ef-fee9f38a5cfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.778848 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-catalog-content\") pod \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.779150 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv5xg\" (UniqueName: \"kubernetes.io/projected/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-kube-api-access-gv5xg\") pod \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\" (UID: \"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa\") " Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.780301 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.785271 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-kube-api-access-gv5xg" (OuterVolumeSpecName: "kube-api-access-gv5xg") pod "e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" (UID: "e54bce74-4b1b-463a-a4ef-fee9f38a5cfa"). InnerVolumeSpecName "kube-api-access-gv5xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.833558 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" (UID: "e54bce74-4b1b-463a-a4ef-fee9f38a5cfa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.882806 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv5xg\" (UniqueName: \"kubernetes.io/projected/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-kube-api-access-gv5xg\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.882866 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.929281 4854 generic.go:334] "Generic (PLEG): container finished" podID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerID="04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89" exitCode=0 Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.929404 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65bkj" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.929385 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65bkj" event={"ID":"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa","Type":"ContainerDied","Data":"04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89"} Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.929462 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65bkj" event={"ID":"e54bce74-4b1b-463a-a4ef-fee9f38a5cfa","Type":"ContainerDied","Data":"6bcc2df716abf7248a61defd54f00ec6f18851f022aa55ae52d2f344d43697b7"} Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.929490 4854 scope.go:117] "RemoveContainer" containerID="04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89" Jan 03 06:06:11 crc kubenswrapper[4854]: I0103 06:06:11.960910 4854 scope.go:117] "RemoveContainer" containerID="7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.014007 4854 scope.go:117] "RemoveContainer" containerID="b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.028365 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65bkj"] Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.041965 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-65bkj"] Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.078400 4854 scope.go:117] "RemoveContainer" containerID="04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89" Jan 03 06:06:12 crc kubenswrapper[4854]: E0103 06:06:12.081247 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89\": container with ID starting with 04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89 not found: ID does not exist" containerID="04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.081302 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89"} err="failed to get container status 
\"04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89\": rpc error: code = NotFound desc = could not find container \"04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89\": container with ID starting with 04198e36f52fe046ef740462572926f3a9e54e11412d331e19968961667a5a89 not found: ID does not exist" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.081333 4854 scope.go:117] "RemoveContainer" containerID="7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6" Jan 03 06:06:12 crc kubenswrapper[4854]: E0103 06:06:12.083028 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6\": container with ID starting with 7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6 not found: ID does not exist" containerID="7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.083079 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6"} err="failed to get container status \"7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6\": rpc error: code = NotFound desc = could not find container \"7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6\": container with ID starting with 7a82c8094fd604ca23534b9561fd77ef2f99e86aecd1adf784fa02acecb563e6 not found: ID does not exist" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.083122 4854 scope.go:117] "RemoveContainer" containerID="b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903" Jan 03 06:06:12 crc kubenswrapper[4854]: E0103 06:06:12.087842 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903\": container with ID starting with b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903 not found: ID does not exist" containerID="b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.087888 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903"} err="failed to get container status \"b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903\": rpc error: code = NotFound desc = could not find container \"b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903\": container with ID starting with b200980c1d7c5dd8c40db49246a201934f0525b93633a06876657471fc126903 not found: ID does not exist" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.135019 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" path="/var/lib/kubelet/pods/e54bce74-4b1b-463a-a4ef-fee9f38a5cfa/volumes" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.449672 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8rffb" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="registry-server" probeResult="failure" output=< Jan 03 06:06:12 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 06:06:12 crc kubenswrapper[4854]: > Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.473831 4854 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.602071 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data-custom\") pod \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.602152 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data\") pod \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.602186 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwc4d\" (UniqueName: \"kubernetes.io/projected/57dd35dc-074c-4a29-92f6-afebc0f9fad3-kube-api-access-cwc4d\") pod \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.602315 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-combined-ca-bundle\") pod \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\" (UID: \"57dd35dc-074c-4a29-92f6-afebc0f9fad3\") " Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.608391 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57dd35dc-074c-4a29-92f6-afebc0f9fad3" (UID: "57dd35dc-074c-4a29-92f6-afebc0f9fad3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.608576 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57dd35dc-074c-4a29-92f6-afebc0f9fad3-kube-api-access-cwc4d" (OuterVolumeSpecName: "kube-api-access-cwc4d") pod "57dd35dc-074c-4a29-92f6-afebc0f9fad3" (UID: "57dd35dc-074c-4a29-92f6-afebc0f9fad3"). InnerVolumeSpecName "kube-api-access-cwc4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.642898 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57dd35dc-074c-4a29-92f6-afebc0f9fad3" (UID: "57dd35dc-074c-4a29-92f6-afebc0f9fad3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.679403 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data" (OuterVolumeSpecName: "config-data") pod "57dd35dc-074c-4a29-92f6-afebc0f9fad3" (UID: "57dd35dc-074c-4a29-92f6-afebc0f9fad3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.705219 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.705262 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.705275 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwc4d\" (UniqueName: \"kubernetes.io/projected/57dd35dc-074c-4a29-92f6-afebc0f9fad3-kube-api-access-cwc4d\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.705288 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dd35dc-074c-4a29-92f6-afebc0f9fad3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.941911 4854 generic.go:334] "Generic (PLEG): container finished" podID="57dd35dc-074c-4a29-92f6-afebc0f9fad3" containerID="86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1" exitCode=0 Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.941983 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79cf8b54b6-vks4f" event={"ID":"57dd35dc-074c-4a29-92f6-afebc0f9fad3","Type":"ContainerDied","Data":"86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1"} Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.942019 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79cf8b54b6-vks4f" event={"ID":"57dd35dc-074c-4a29-92f6-afebc0f9fad3","Type":"ContainerDied","Data":"58da2c540f51305af2b3dc6e981c31aed48347582e1b215dc37ded0dc9bcabbf"} Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.942039 4854 scope.go:117] "RemoveContainer" containerID="86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.942174 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-79cf8b54b6-vks4f" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.981519 4854 scope.go:117] "RemoveContainer" containerID="86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1" Jan 03 06:06:12 crc kubenswrapper[4854]: E0103 06:06:12.982103 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1\": container with ID starting with 86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1 not found: ID does not exist" containerID="86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.982155 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1"} err="failed to get container status \"86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1\": rpc error: code = NotFound desc = could not find container \"86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1\": container with ID starting with 86ef38ad3fc6e7960ff9d969c9c7b497087e21b1c52cb1332c14406c64ec98b1 not found: ID does not exist" Jan 03 06:06:12 crc kubenswrapper[4854]: I0103 06:06:12.993118 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-79cf8b54b6-vks4f"] Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.008825 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-79cf8b54b6-vks4f"] Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.313311 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptr5k"] Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314068 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8224418-e8de-49e2-a7f5-059ea9ed6f72" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314108 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8224418-e8de-49e2-a7f5-059ea9ed6f72" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314124 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b6b681-415b-40ee-9510-801116f895c8" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314131 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b6b681-415b-40ee-9510-801116f895c8" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314150 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57dd35dc-074c-4a29-92f6-afebc0f9fad3" containerName="heat-engine" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314157 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="57dd35dc-074c-4a29-92f6-afebc0f9fad3" containerName="heat-engine" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314169 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerName="registry-server" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314176 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerName="registry-server" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314187 4854 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerName="extract-content" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314193 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerName="extract-content" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314201 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ee7c12-946c-4a6c-b15b-15cd1c15bd30" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314207 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ee7c12-946c-4a6c-b15b-15cd1c15bd30" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314221 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerName="extract-utilities" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314228 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerName="extract-utilities" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314234 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a290934-d2f6-475a-814a-209a27b7e897" containerName="heat-cfnapi" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314240 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a290934-d2f6-475a-814a-209a27b7e897" containerName="heat-cfnapi" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314252 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a290934-d2f6-475a-814a-209a27b7e897" containerName="heat-cfnapi" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314259 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a290934-d2f6-475a-814a-209a27b7e897" containerName="heat-cfnapi" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314269 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" containerName="heat-api" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314275 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" containerName="heat-api" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314290 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa6af8e-d27e-4727-a8bc-4a2e5690cc88" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314296 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa6af8e-d27e-4727-a8bc-4a2e5690cc88" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314302 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04b84e4-f513-40cc-bd0e-852449fb839d" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314308 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04b84e4-f513-40cc-bd0e-852449fb839d" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 06:06:13.314316 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b23ad2-3ba6-44d4-88a0-aad1458970d0" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314321 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b23ad2-3ba6-44d4-88a0-aad1458970d0" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: E0103 
06:06:13.314339 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" containerName="heat-api" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314345 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" containerName="heat-api" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314534 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa6af8e-d27e-4727-a8bc-4a2e5690cc88" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314553 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" containerName="heat-api" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314561 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04b84e4-f513-40cc-bd0e-852449fb839d" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314573 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8224418-e8de-49e2-a7f5-059ea9ed6f72" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314583 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54bce74-4b1b-463a-a4ef-fee9f38a5cfa" containerName="registry-server" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314596 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b6b681-415b-40ee-9510-801116f895c8" containerName="mariadb-database-create" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314604 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1aee80e-651a-4434-a1da-34bd6dbd83bd" containerName="heat-api" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314617 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a290934-d2f6-475a-814a-209a27b7e897" containerName="heat-cfnapi" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314636 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="57dd35dc-074c-4a29-92f6-afebc0f9fad3" containerName="heat-engine" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314647 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a290934-d2f6-475a-814a-209a27b7e897" containerName="heat-cfnapi" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314657 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b23ad2-3ba6-44d4-88a0-aad1458970d0" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.314667 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ee7c12-946c-4a6c-b15b-15cd1c15bd30" containerName="mariadb-account-create-update" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.315462 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.317769 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.320608 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jp9zh" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.331190 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptr5k"] Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.340734 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.421547 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-config-data\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.421614 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.421850 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9slk\" (UniqueName: \"kubernetes.io/projected/0ffba700-7bb8-458d-b50f-322985473e2d-kube-api-access-z9slk\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.421895 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-scripts\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.523794 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9slk\" (UniqueName: \"kubernetes.io/projected/0ffba700-7bb8-458d-b50f-322985473e2d-kube-api-access-z9slk\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.523859 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-scripts\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.523893 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-config-data\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: 
\"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.523924 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.528796 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.528951 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-config-data\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.529179 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-scripts\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.543496 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9slk\" (UniqueName: \"kubernetes.io/projected/0ffba700-7bb8-458d-b50f-322985473e2d-kube-api-access-z9slk\") pod \"nova-cell0-conductor-db-sync-ptr5k\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") " pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:13 crc kubenswrapper[4854]: I0103 06:06:13.696952 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptr5k" Jan 03 06:06:14 crc kubenswrapper[4854]: I0103 06:06:14.132067 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57dd35dc-074c-4a29-92f6-afebc0f9fad3" path="/var/lib/kubelet/pods/57dd35dc-074c-4a29-92f6-afebc0f9fad3/volumes" Jan 03 06:06:14 crc kubenswrapper[4854]: I0103 06:06:14.198628 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptr5k"] Jan 03 06:06:14 crc kubenswrapper[4854]: I0103 06:06:14.985487 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptr5k" event={"ID":"0ffba700-7bb8-458d-b50f-322985473e2d","Type":"ContainerStarted","Data":"d6551415f4e6b154c364d59b89a663b412df7d6782b87ab7de911655d9af1081"} Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.442324 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4jpmx"] Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.445421 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.466412 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jpmx"] Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.570746 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295fb\" (UniqueName: \"kubernetes.io/projected/210d7917-5e9d-486f-a1bc-d9297b633a41-kube-api-access-295fb\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.570822 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-catalog-content\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.570883 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-utilities\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.675552 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-295fb\" (UniqueName: \"kubernetes.io/projected/210d7917-5e9d-486f-a1bc-d9297b633a41-kube-api-access-295fb\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.675624 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-catalog-content\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.675655 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-utilities\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.676242 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-utilities\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.676536 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-catalog-content\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.695627 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-295fb\" (UniqueName: \"kubernetes.io/projected/210d7917-5e9d-486f-a1bc-d9297b633a41-kube-api-access-295fb\") pod \"redhat-marketplace-4jpmx\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") " pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:15 crc kubenswrapper[4854]: I0103 06:06:15.772441 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jpmx" Jan 03 06:06:16 crc kubenswrapper[4854]: I0103 06:06:16.407539 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jpmx"] Jan 03 06:06:17 crc kubenswrapper[4854]: I0103 06:06:17.014526 4854 generic.go:334] "Generic (PLEG): container finished" podID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerID="92d5ec8c664838f194363288ecd198757c60cf213d47ad871ba126da74846fea" exitCode=0 Jan 03 06:06:17 crc kubenswrapper[4854]: I0103 06:06:17.014604 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jpmx" event={"ID":"210d7917-5e9d-486f-a1bc-d9297b633a41","Type":"ContainerDied","Data":"92d5ec8c664838f194363288ecd198757c60cf213d47ad871ba126da74846fea"} Jan 03 06:06:17 crc kubenswrapper[4854]: I0103 06:06:17.014898 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jpmx" event={"ID":"210d7917-5e9d-486f-a1bc-d9297b633a41","Type":"ContainerStarted","Data":"ef1ac32f42930d4b088d326fb841147c83fd0327fa206a292b009d1d830e2645"} Jan 03 06:06:18 crc kubenswrapper[4854]: I0103 06:06:18.037640 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jpmx" event={"ID":"210d7917-5e9d-486f-a1bc-d9297b633a41","Type":"ContainerStarted","Data":"83e4ef196288962685d78d435bb370e8aad25320a944be8b26e55339f758daf4"} Jan 03 06:06:19 crc kubenswrapper[4854]: I0103 06:06:19.063915 4854 generic.go:334] "Generic (PLEG): container finished" podID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerID="83e4ef196288962685d78d435bb370e8aad25320a944be8b26e55339f758daf4" exitCode=0 Jan 03 06:06:19 crc kubenswrapper[4854]: I0103 06:06:19.064016 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jpmx" event={"ID":"210d7917-5e9d-486f-a1bc-d9297b633a41","Type":"ContainerDied","Data":"83e4ef196288962685d78d435bb370e8aad25320a944be8b26e55339f758daf4"} Jan 03 06:06:21 crc kubenswrapper[4854]: I0103 06:06:21.453198 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:21 crc kubenswrapper[4854]: I0103 06:06:21.511368 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.023397 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rffb"] Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.076099 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.076350 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerName="glance-log" containerID="cri-o://4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415" gracePeriod=30 Jan 03 06:06:23 crc 
kubenswrapper[4854]: I0103 06:06:23.076886 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerName="glance-httpd" containerID="cri-o://6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1" gracePeriod=30 Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.148867 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8rffb" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="registry-server" containerID="cri-o://cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8" gracePeriod=2 Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.695940 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.829432 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7c5\" (UniqueName: \"kubernetes.io/projected/b65c9339-9f97-4c2f-8d6f-4344c7c33395-kube-api-access-ss7c5\") pod \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.829643 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-catalog-content\") pod \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.829703 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-utilities\") pod \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\" (UID: \"b65c9339-9f97-4c2f-8d6f-4344c7c33395\") " Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.830911 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-utilities" (OuterVolumeSpecName: "utilities") pod "b65c9339-9f97-4c2f-8d6f-4344c7c33395" (UID: "b65c9339-9f97-4c2f-8d6f-4344c7c33395"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.834366 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65c9339-9f97-4c2f-8d6f-4344c7c33395-kube-api-access-ss7c5" (OuterVolumeSpecName: "kube-api-access-ss7c5") pod "b65c9339-9f97-4c2f-8d6f-4344c7c33395" (UID: "b65c9339-9f97-4c2f-8d6f-4344c7c33395"). InnerVolumeSpecName "kube-api-access-ss7c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.932559 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.932590 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7c5\" (UniqueName: \"kubernetes.io/projected/b65c9339-9f97-4c2f-8d6f-4344c7c33395-kube-api-access-ss7c5\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:23 crc kubenswrapper[4854]: I0103 06:06:23.936034 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b65c9339-9f97-4c2f-8d6f-4344c7c33395" (UID: "b65c9339-9f97-4c2f-8d6f-4344c7c33395"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.034636 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65c9339-9f97-4c2f-8d6f-4344c7c33395-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.175513 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptr5k" event={"ID":"0ffba700-7bb8-458d-b50f-322985473e2d","Type":"ContainerStarted","Data":"601848ec57f14b23dfba8b7d3ce2cacb396834177d0434e06bcd0fcdb811bb8f"} Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.179983 4854 generic.go:334] "Generic (PLEG): container finished" podID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerID="4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415" exitCode=143 Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.180112 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c6807cf-78d2-4314-be86-3193a4f978a7","Type":"ContainerDied","Data":"4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415"} Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.183876 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jpmx" event={"ID":"210d7917-5e9d-486f-a1bc-d9297b633a41","Type":"ContainerStarted","Data":"f7bcac251203930f839e6f54487da9cf8b8a44a4bc2047d8aaf6cffd2ccc8002"} Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.188309 4854 generic.go:334] "Generic (PLEG): container finished" podID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerID="cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8" exitCode=0 Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.188466 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rffb" event={"ID":"b65c9339-9f97-4c2f-8d6f-4344c7c33395","Type":"ContainerDied","Data":"cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8"} Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.188560 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rffb" event={"ID":"b65c9339-9f97-4c2f-8d6f-4344c7c33395","Type":"ContainerDied","Data":"1dbfc285598c5a140de5b47bfd1a5b5a608707922e0c100b0246e4fd7047b470"} Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.188651 4854 scope.go:117] "RemoveContainer" 
containerID="cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.188875 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rffb" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.213680 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ptr5k" podStartSLOduration=2.055451551 podStartE2EDuration="11.213653732s" podCreationTimestamp="2026-01-03 06:06:13 +0000 UTC" firstStartedPulling="2026-01-03 06:06:14.202541836 +0000 UTC m=+1552.529118418" lastFinishedPulling="2026-01-03 06:06:23.360744027 +0000 UTC m=+1561.687320599" observedRunningTime="2026-01-03 06:06:24.196944626 +0000 UTC m=+1562.523521208" watchObservedRunningTime="2026-01-03 06:06:24.213653732 +0000 UTC m=+1562.540230314" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.230781 4854 scope.go:117] "RemoveContainer" containerID="8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.246023 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4jpmx" podStartSLOduration=2.8090008859999998 podStartE2EDuration="9.245995309s" podCreationTimestamp="2026-01-03 06:06:15 +0000 UTC" firstStartedPulling="2026-01-03 06:06:17.016599649 +0000 UTC m=+1555.343176221" lastFinishedPulling="2026-01-03 06:06:23.453594072 +0000 UTC m=+1561.780170644" observedRunningTime="2026-01-03 06:06:24.237956038 +0000 UTC m=+1562.564532610" watchObservedRunningTime="2026-01-03 06:06:24.245995309 +0000 UTC m=+1562.572571881" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.258016 4854 scope.go:117] "RemoveContainer" containerID="601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.282638 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rffb"] Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.297024 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8rffb"] Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.303710 4854 scope.go:117] "RemoveContainer" containerID="cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8" Jan 03 06:06:24 crc kubenswrapper[4854]: E0103 06:06:24.305032 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8\": container with ID starting with cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8 not found: ID does not exist" containerID="cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.305098 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8"} err="failed to get container status \"cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8\": rpc error: code = NotFound desc = could not find container \"cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8\": container with ID starting with cf4fe947d4397bacb2b92a2a76341776c6ab8b41ea88cb4ccb857c6a2faf42e8 not found: ID does not exist" Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 
Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.305124 4854 scope.go:117] "RemoveContainer" containerID="8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c"
Jan 03 06:06:24 crc kubenswrapper[4854]: E0103 06:06:24.305818 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c\": container with ID starting with 8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c not found: ID does not exist" containerID="8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c"
Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.306011 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c"} err="failed to get container status \"8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c\": rpc error: code = NotFound desc = could not find container \"8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c\": container with ID starting with 8af947bb8bbfbbd831d1ea9ebf65ca14498a3dd69acc7d8c35f96c1b01aef87c not found: ID does not exist"
Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.306213 4854 scope.go:117] "RemoveContainer" containerID="601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac"
Jan 03 06:06:24 crc kubenswrapper[4854]: E0103 06:06:24.307500 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac\": container with ID starting with 601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac not found: ID does not exist" containerID="601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac"
Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.307561 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac"} err="failed to get container status \"601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac\": rpc error: code = NotFound desc = could not find container \"601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac\": container with ID starting with 601f024b2d3b91533561f324dbe76c4b96adf7efc2ce6d80254d83ea7666eaac not found: ID does not exist"
Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.316265 4854 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod1f4da9c0-58a1-41d0-9d97-6cf376e6233d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod1f4da9c0-58a1-41d0-9d97-6cf376e6233d] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1f4da9c0_58a1_41d0_9d97_6cf376e6233d.slice"
Jan 03 06:06:24 crc kubenswrapper[4854]: E0103 06:06:24.398325 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb65c9339_9f97_4c2f_8d6f_4344c7c33395.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb65c9339_9f97_4c2f_8d6f_4344c7c33395.slice/crio-1dbfc285598c5a140de5b47bfd1a5b5a608707922e0c100b0246e4fd7047b470\": RecentStats: unable to find data in memory cache]"
Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.831940 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.836631 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerName="glance-log" containerID="cri-o://85643b8d8711e7fcd4a7c9cf76762eace64d762f122af93501aa913d2f5fb88d" gracePeriod=30
Jan 03 06:06:24 crc kubenswrapper[4854]: I0103 06:06:24.836705 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerName="glance-httpd" containerID="cri-o://8100ec056877151f84d04b412963cdb83d2054f08073f1c1ab4a64e7a7da9f5f" gracePeriod=30
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.203098 4854 generic.go:334] "Generic (PLEG): container finished" podID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerID="85643b8d8711e7fcd4a7c9cf76762eace64d762f122af93501aa913d2f5fb88d" exitCode=143
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.203184 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"592030b4-bfc1-4eb9-81a3-20a22a405f70","Type":"ContainerDied","Data":"85643b8d8711e7fcd4a7c9cf76762eace64d762f122af93501aa913d2f5fb88d"}
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.737857 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.738200 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="sg-core" containerID="cri-o://a1f982716199eeee216fa9cde5b54660f4d8cf68fd9401a187541846fa072f27" gracePeriod=30
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.738256 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="proxy-httpd" containerID="cri-o://000200e55927526ddc97fdb2df68709ac6b8f08aebb43297bae5b548d4e75c59" gracePeriod=30
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.738256 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="ceilometer-notification-agent" containerID="cri-o://eba305729573796e2491bfe8e22a01c956224f31b8ca5337ae374b8ac948ef69" gracePeriod=30
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.738160 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="ceilometer-central-agent" containerID="cri-o://c36c0aa53f56e323a283942b7ab861f27bab7b9109892aac492aa97b18337f62" gracePeriod=30
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.749116 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.230:3000/\": EOF"
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.773489 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4jpmx"
Jan 03 06:06:25 crc kubenswrapper[4854]: I0103 06:06:25.773546 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4jpmx"
Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.130771 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" path="/var/lib/kubelet/pods/b65c9339-9f97-4c2f-8d6f-4344c7c33395/volumes"
Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.217951 4854 generic.go:334] "Generic (PLEG): container finished" podID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerID="000200e55927526ddc97fdb2df68709ac6b8f08aebb43297bae5b548d4e75c59" exitCode=0
Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.217994 4854 generic.go:334] "Generic (PLEG): container finished" podID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerID="a1f982716199eeee216fa9cde5b54660f4d8cf68fd9401a187541846fa072f27" exitCode=2
Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.218044 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerDied","Data":"000200e55927526ddc97fdb2df68709ac6b8f08aebb43297bae5b548d4e75c59"}
Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.218135 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerDied","Data":"a1f982716199eeee216fa9cde5b54660f4d8cf68fd9401a187541846fa072f27"}
Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.834895 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4jpmx" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="registry-server" probeResult="failure" output=<
Jan 03 06:06:26 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s
Jan 03 06:06:26 crc kubenswrapper[4854]: >
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.922773 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zc7d\" (UniqueName: \"kubernetes.io/projected/7c6807cf-78d2-4314-be86-3193a4f978a7-kube-api-access-7zc7d\") pod \"7c6807cf-78d2-4314-be86-3193a4f978a7\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.922948 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-scripts\") pod \"7c6807cf-78d2-4314-be86-3193a4f978a7\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.924547 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"7c6807cf-78d2-4314-be86-3193a4f978a7\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.924639 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-logs\") pod \"7c6807cf-78d2-4314-be86-3193a4f978a7\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.924857 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-config-data\") pod \"7c6807cf-78d2-4314-be86-3193a4f978a7\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.924890 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-public-tls-certs\") pod \"7c6807cf-78d2-4314-be86-3193a4f978a7\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.925022 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-combined-ca-bundle\") pod \"7c6807cf-78d2-4314-be86-3193a4f978a7\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.925055 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-httpd-run\") pod \"7c6807cf-78d2-4314-be86-3193a4f978a7\" (UID: \"7c6807cf-78d2-4314-be86-3193a4f978a7\") " Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.925057 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-logs" (OuterVolumeSpecName: "logs") pod "7c6807cf-78d2-4314-be86-3193a4f978a7" (UID: "7c6807cf-78d2-4314-be86-3193a4f978a7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.926309 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.926436 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7c6807cf-78d2-4314-be86-3193a4f978a7" (UID: "7c6807cf-78d2-4314-be86-3193a4f978a7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.931468 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-scripts" (OuterVolumeSpecName: "scripts") pod "7c6807cf-78d2-4314-be86-3193a4f978a7" (UID: "7c6807cf-78d2-4314-be86-3193a4f978a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.931896 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6807cf-78d2-4314-be86-3193a4f978a7-kube-api-access-7zc7d" (OuterVolumeSpecName: "kube-api-access-7zc7d") pod "7c6807cf-78d2-4314-be86-3193a4f978a7" (UID: "7c6807cf-78d2-4314-be86-3193a4f978a7"). InnerVolumeSpecName "kube-api-access-7zc7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.970241 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e" (OuterVolumeSpecName: "glance") pod "7c6807cf-78d2-4314-be86-3193a4f978a7" (UID: "7c6807cf-78d2-4314-be86-3193a4f978a7"). InnerVolumeSpecName "pvc-eb48dc05-44bc-494a-9a3b-52570d27764e". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 06:06:26 crc kubenswrapper[4854]: I0103 06:06:26.991042 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c6807cf-78d2-4314-be86-3193a4f978a7" (UID: "7c6807cf-78d2-4314-be86-3193a4f978a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.012629 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7c6807cf-78d2-4314-be86-3193a4f978a7" (UID: "7c6807cf-78d2-4314-be86-3193a4f978a7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.032473 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.032534 4854 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c6807cf-78d2-4314-be86-3193a4f978a7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.032806 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zc7d\" (UniqueName: \"kubernetes.io/projected/7c6807cf-78d2-4314-be86-3193a4f978a7-kube-api-access-7zc7d\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.032833 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.032901 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") on node \"crc\" " Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.032920 4854 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.041525 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-config-data" (OuterVolumeSpecName: "config-data") pod "7c6807cf-78d2-4314-be86-3193a4f978a7" (UID: "7c6807cf-78d2-4314-be86-3193a4f978a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.073861 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.074021 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-eb48dc05-44bc-494a-9a3b-52570d27764e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e") on node "crc"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.135661 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.135704 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c6807cf-78d2-4314-be86-3193a4f978a7-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.230839 4854 generic.go:334] "Generic (PLEG): container finished" podID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerID="c36c0aa53f56e323a283942b7ab861f27bab7b9109892aac492aa97b18337f62" exitCode=0
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.230908 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerDied","Data":"c36c0aa53f56e323a283942b7ab861f27bab7b9109892aac492aa97b18337f62"}
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.232932 4854 generic.go:334] "Generic (PLEG): container finished" podID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerID="6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1" exitCode=0
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.232965 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c6807cf-78d2-4314-be86-3193a4f978a7","Type":"ContainerDied","Data":"6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1"}
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.232997 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c6807cf-78d2-4314-be86-3193a4f978a7","Type":"ContainerDied","Data":"b5f3cfa503a68335cece6183716584750500821567821c9d4616ad7daf3dbc80"}
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.233015 4854 scope.go:117] "RemoveContainer" containerID="6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.233019 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.259119 4854 scope.go:117] "RemoveContainer" containerID="4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.278840 4854 scope.go:117] "RemoveContainer" containerID="6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1"
Jan 03 06:06:27 crc kubenswrapper[4854]: E0103 06:06:27.279314 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1\": container with ID starting with 6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1 not found: ID does not exist" containerID="6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.279363 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1"} err="failed to get container status \"6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1\": rpc error: code = NotFound desc = could not find container \"6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1\": container with ID starting with 6c46ade4d9d60b7442622c1fefc49cd81bdb643723bc9efdf2f5666873fac3f1 not found: ID does not exist"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.279394 4854 scope.go:117] "RemoveContainer" containerID="4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415"
Jan 03 06:06:27 crc kubenswrapper[4854]: E0103 06:06:27.279597 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415\": container with ID starting with 4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415 not found: ID does not exist" containerID="4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.279619 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415"} err="failed to get container status \"4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415\": rpc error: code = NotFound desc = could not find container \"4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415\": container with ID starting with 4f5c017b53116d2cacca17ed02b99c93ce5f2c0ba7b305cd414189f7ff05d415 not found: ID does not exist"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.285386 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.301703 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.315902 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 03 06:06:27 crc kubenswrapper[4854]: E0103 06:06:27.316349 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerName="glance-httpd"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.316367 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerName="glance-httpd"
Jan 03 06:06:27 crc kubenswrapper[4854]: E0103 06:06:27.316393 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="extract-utilities"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.316399 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="extract-utilities"
Jan 03 06:06:27 crc kubenswrapper[4854]: E0103 06:06:27.316423 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="registry-server"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.316429 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="registry-server"
Jan 03 06:06:27 crc kubenswrapper[4854]: E0103 06:06:27.316437 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="extract-content"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.316443 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="extract-content"
Jan 03 06:06:27 crc kubenswrapper[4854]: E0103 06:06:27.316464 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerName="glance-log"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.316470 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerName="glance-log"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.316681 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerName="glance-httpd"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.316705 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" containerName="glance-log"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.316724 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65c9339-9f97-4c2f-8d6f-4344c7c33395" containerName="registry-server"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.317942 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.320342 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.320397 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.330350 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.444050 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.444357 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.444391 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.444432 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-logs\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.444471 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.444493 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j987v\" (UniqueName: \"kubernetes.io/projected/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-kube-api-access-j987v\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.444564 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-config-data\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.444602 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-scripts\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.546921 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-config-data\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.546989 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-scripts\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.547093 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.547143 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.547187 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.547233 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-logs\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.547277 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.547298 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j987v\" (UniqueName: \"kubernetes.io/projected/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-kube-api-access-j987v\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.548038 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-logs\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.548093 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.552805 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.553661 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-scripts\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.554037 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-config-data\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.554498 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.554537 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c9f88882bd3572929d1777fc402cfbfc71f661649bff4d337785efef0a76426b/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.554795 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.577946 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j987v\" (UniqueName: \"kubernetes.io/projected/ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05-kube-api-access-j987v\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.607156 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb48dc05-44bc-494a-9a3b-52570d27764e\") pod \"glance-default-external-api-0\" (UID: \"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05\") " pod="openstack/glance-default-external-api-0"
Jan 03 06:06:27 crc kubenswrapper[4854]: I0103 06:06:27.635913 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.138472 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6807cf-78d2-4314-be86-3193a4f978a7" path="/var/lib/kubelet/pods/7c6807cf-78d2-4314-be86-3193a4f978a7/volumes"
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.303215 4854 generic.go:334] "Generic (PLEG): container finished" podID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerID="8100ec056877151f84d04b412963cdb83d2054f08073f1c1ab4a64e7a7da9f5f" exitCode=0
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.303322 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"592030b4-bfc1-4eb9-81a3-20a22a405f70","Type":"ContainerDied","Data":"8100ec056877151f84d04b412963cdb83d2054f08073f1c1ab4a64e7a7da9f5f"}
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.462402 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 03 06:06:28 crc kubenswrapper[4854]: W0103 06:06:28.489587 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffae8cd6_4ec1_43b5_9c20_a2a23c7d6d05.slice/crio-e0785250e1f9ec51d9cc9a420224b483cf005ebaea252a630f6095f0d67f4757 WatchSource:0}: Error finding container e0785250e1f9ec51d9cc9a420224b483cf005ebaea252a630f6095f0d67f4757: Status 404 returned error can't find the container with id e0785250e1f9ec51d9cc9a420224b483cf005ebaea252a630f6095f0d67f4757
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.808578 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.900660 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-config-data\") pod \"592030b4-bfc1-4eb9-81a3-20a22a405f70\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") "
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.900708 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td89g\" (UniqueName: \"kubernetes.io/projected/592030b4-bfc1-4eb9-81a3-20a22a405f70-kube-api-access-td89g\") pod \"592030b4-bfc1-4eb9-81a3-20a22a405f70\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") "
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.903281 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"592030b4-bfc1-4eb9-81a3-20a22a405f70\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") "
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.903349 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-combined-ca-bundle\") pod \"592030b4-bfc1-4eb9-81a3-20a22a405f70\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") "
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.903455 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-httpd-run\") pod \"592030b4-bfc1-4eb9-81a3-20a22a405f70\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") "
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.903513 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-scripts\") pod \"592030b4-bfc1-4eb9-81a3-20a22a405f70\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") "
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.903547 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-internal-tls-certs\") pod \"592030b4-bfc1-4eb9-81a3-20a22a405f70\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") "
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.903605 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-logs\") pod \"592030b4-bfc1-4eb9-81a3-20a22a405f70\" (UID: \"592030b4-bfc1-4eb9-81a3-20a22a405f70\") "
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.904662 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "592030b4-bfc1-4eb9-81a3-20a22a405f70" (UID: "592030b4-bfc1-4eb9-81a3-20a22a405f70"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.905299 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-logs" (OuterVolumeSpecName: "logs") pod "592030b4-bfc1-4eb9-81a3-20a22a405f70" (UID: "592030b4-bfc1-4eb9-81a3-20a22a405f70"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.913593 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-scripts" (OuterVolumeSpecName: "scripts") pod "592030b4-bfc1-4eb9-81a3-20a22a405f70" (UID: "592030b4-bfc1-4eb9-81a3-20a22a405f70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.920893 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592030b4-bfc1-4eb9-81a3-20a22a405f70-kube-api-access-td89g" (OuterVolumeSpecName: "kube-api-access-td89g") pod "592030b4-bfc1-4eb9-81a3-20a22a405f70" (UID: "592030b4-bfc1-4eb9-81a3-20a22a405f70"). InnerVolumeSpecName "kube-api-access-td89g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.958295 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "592030b4-bfc1-4eb9-81a3-20a22a405f70" (UID: "592030b4-bfc1-4eb9-81a3-20a22a405f70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:28 crc kubenswrapper[4854]: I0103 06:06:28.968859 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e" (OuterVolumeSpecName: "glance") pod "592030b4-bfc1-4eb9-81a3-20a22a405f70" (UID: "592030b4-bfc1-4eb9-81a3-20a22a405f70"). InnerVolumeSpecName "pvc-51516975-721e-4e12-b0dd-7f07c321db4e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.007475 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-logs\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.007521 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td89g\" (UniqueName: \"kubernetes.io/projected/592030b4-bfc1-4eb9-81a3-20a22a405f70-kube-api-access-td89g\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.007579 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") on node \"crc\" "
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.007596 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.007610 4854 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/592030b4-bfc1-4eb9-81a3-20a22a405f70-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.007620 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.035300 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-config-data" (OuterVolumeSpecName: "config-data") pod "592030b4-bfc1-4eb9-81a3-20a22a405f70" (UID: "592030b4-bfc1-4eb9-81a3-20a22a405f70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.047930 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "592030b4-bfc1-4eb9-81a3-20a22a405f70" (UID: "592030b4-bfc1-4eb9-81a3-20a22a405f70"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.099004 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.099175 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-51516975-721e-4e12-b0dd-7f07c321db4e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e") on node "crc"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.111005 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.111048 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.111064 4854 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/592030b4-bfc1-4eb9-81a3-20a22a405f70-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.357860 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05","Type":"ContainerStarted","Data":"e0785250e1f9ec51d9cc9a420224b483cf005ebaea252a630f6095f0d67f4757"}
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.362182 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"592030b4-bfc1-4eb9-81a3-20a22a405f70","Type":"ContainerDied","Data":"c70e8d7c9620c6e4df6b0b13fc6fc26c413b8d9c8584de0c842f3668e9319747"}
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.362258 4854 scope.go:117] "RemoveContainer" containerID="8100ec056877151f84d04b412963cdb83d2054f08073f1c1ab4a64e7a7da9f5f"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.362334 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.410494 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.420764 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.444201 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:06:29 crc kubenswrapper[4854]: E0103 06:06:29.444790 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerName="glance-log"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.444808 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerName="glance-log"
Jan 03 06:06:29 crc kubenswrapper[4854]: E0103 06:06:29.444829 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerName="glance-httpd"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.444839 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerName="glance-httpd"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.445074 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerName="glance-httpd"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.445124 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" containerName="glance-log"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.446363 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.457779 4854 scope.go:117] "RemoveContainer" containerID="85643b8d8711e7fcd4a7c9cf76762eace64d762f122af93501aa913d2f5fb88d"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.457967 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.458209 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.460476 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.524642 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/861c2ac7-54e5-4409-ab4a-87bdfe074572-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.524726 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-scripts\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.524774 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-config-data\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.524809 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.524834 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/861c2ac7-54e5-4409-ab4a-87bdfe074572-logs\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.524850 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.524872 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvvrl\" (UniqueName: \"kubernetes.io/projected/861c2ac7-54e5-4409-ab4a-87bdfe074572-kube-api-access-mvvrl\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.524887 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.627648 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/861c2ac7-54e5-4409-ab4a-87bdfe074572-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.627743 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-scripts\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.627794 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-config-data\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.627863 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.627895 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/861c2ac7-54e5-4409-ab4a-87bdfe074572-logs\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.627913 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.627939 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvvrl\" (UniqueName: \"kubernetes.io/projected/861c2ac7-54e5-4409-ab4a-87bdfe074572-kube-api-access-mvvrl\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.627961 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.630442 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/861c2ac7-54e5-4409-ab4a-87bdfe074572-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.634763 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/861c2ac7-54e5-4409-ab4a-87bdfe074572-logs\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.638810 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-scripts\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.643832 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.643882 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.646169 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861c2ac7-54e5-4409-ab4a-87bdfe074572-config-data\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.661683 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.661731 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b13373a9549fe2a930d42f4a05d9c8d5f6308eb7dd62d88a464a85534d03fb8/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.664801 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvvrl\" (UniqueName: \"kubernetes.io/projected/861c2ac7-54e5-4409-ab4a-87bdfe074572-kube-api-access-mvvrl\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:29 crc kubenswrapper[4854]: I0103 06:06:29.913065 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51516975-721e-4e12-b0dd-7f07c321db4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51516975-721e-4e12-b0dd-7f07c321db4e\") pod \"glance-default-internal-api-0\" (UID: \"861c2ac7-54e5-4409-ab4a-87bdfe074572\") " pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:30 crc kubenswrapper[4854]: I0103 06:06:30.135318 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592030b4-bfc1-4eb9-81a3-20a22a405f70" path="/var/lib/kubelet/pods/592030b4-bfc1-4eb9-81a3-20a22a405f70/volumes"
Jan 03 06:06:30 crc kubenswrapper[4854]: I0103 06:06:30.139788 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 06:06:30 crc kubenswrapper[4854]: I0103 06:06:30.394688 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05","Type":"ContainerStarted","Data":"4654a8108609c8f6cfafed160c0928efd614c3c7988fc284edf5b8efa1e9da42"}
Jan 03 06:06:30 crc kubenswrapper[4854]: I0103 06:06:30.396417 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ffae8cd6-4ec1-43b5-9c20-a2a23c7d6d05","Type":"ContainerStarted","Data":"9b5732ef7b63356e0630281954d634a86406d6ed9974e5dae788e1cb26a5c24f"}
Jan 03 06:06:30 crc kubenswrapper[4854]: I0103 06:06:30.420102 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.420069165 podStartE2EDuration="3.420069165s" podCreationTimestamp="2026-01-03 06:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:30.41424443 +0000 UTC m=+1568.740821012" watchObservedRunningTime="2026-01-03 06:06:30.420069165 +0000 UTC m=+1568.746645727"
Jan 03 06:06:30 crc kubenswrapper[4854]: W0103 06:06:30.713418 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod861c2ac7_54e5_4409_ab4a_87bdfe074572.slice/crio-64b2b258b18a9896c49fb9914b7644bc16a8256f929fc418476a6562a63d22bb WatchSource:0}: Error finding container 64b2b258b18a9896c49fb9914b7644bc16a8256f929fc418476a6562a63d22bb: Status 404 returned error can't find the container with id 64b2b258b18a9896c49fb9914b7644bc16a8256f929fc418476a6562a63d22bb
Jan 03 06:06:30 crc kubenswrapper[4854]: I0103 06:06:30.720721 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.436612 4854 generic.go:334] "Generic (PLEG): container finished" podID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerID="eba305729573796e2491bfe8e22a01c956224f31b8ca5337ae374b8ac948ef69" exitCode=0
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.436922 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerDied","Data":"eba305729573796e2491bfe8e22a01c956224f31b8ca5337ae374b8ac948ef69"}
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.442398 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"861c2ac7-54e5-4409-ab4a-87bdfe074572","Type":"ContainerStarted","Data":"8a802dacf036e919282996bb91d34411d217d43b06c6e3131afefead12cdeccb"}
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.442469 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"861c2ac7-54e5-4409-ab4a-87bdfe074572","Type":"ContainerStarted","Data":"64b2b258b18a9896c49fb9914b7644bc16a8256f929fc418476a6562a63d22bb"}
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.725202 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.792903 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-config-data\") pod \"217dbe70-f377-42d7-8b9a-bbc22f53b861\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") "
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.793046 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfbnh\" (UniqueName: \"kubernetes.io/projected/217dbe70-f377-42d7-8b9a-bbc22f53b861-kube-api-access-kfbnh\") pod \"217dbe70-f377-42d7-8b9a-bbc22f53b861\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") "
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.793109 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-combined-ca-bundle\") pod \"217dbe70-f377-42d7-8b9a-bbc22f53b861\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") "
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.796385 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-log-httpd\") pod \"217dbe70-f377-42d7-8b9a-bbc22f53b861\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") "
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.796461 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-scripts\") pod \"217dbe70-f377-42d7-8b9a-bbc22f53b861\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") "
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.796490 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-run-httpd\") pod \"217dbe70-f377-42d7-8b9a-bbc22f53b861\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") "
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.796624 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-sg-core-conf-yaml\") pod \"217dbe70-f377-42d7-8b9a-bbc22f53b861\" (UID: \"217dbe70-f377-42d7-8b9a-bbc22f53b861\") "
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.801242 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "217dbe70-f377-42d7-8b9a-bbc22f53b861" (UID: "217dbe70-f377-42d7-8b9a-bbc22f53b861"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.802489 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "217dbe70-f377-42d7-8b9a-bbc22f53b861" (UID: "217dbe70-f377-42d7-8b9a-bbc22f53b861"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.802828 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217dbe70-f377-42d7-8b9a-bbc22f53b861-kube-api-access-kfbnh" (OuterVolumeSpecName: "kube-api-access-kfbnh") pod "217dbe70-f377-42d7-8b9a-bbc22f53b861" (UID: "217dbe70-f377-42d7-8b9a-bbc22f53b861"). InnerVolumeSpecName "kube-api-access-kfbnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.803400 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-scripts" (OuterVolumeSpecName: "scripts") pod "217dbe70-f377-42d7-8b9a-bbc22f53b861" (UID: "217dbe70-f377-42d7-8b9a-bbc22f53b861"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.847706 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "217dbe70-f377-42d7-8b9a-bbc22f53b861" (UID: "217dbe70-f377-42d7-8b9a-bbc22f53b861"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.908013 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.908047 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfbnh\" (UniqueName: \"kubernetes.io/projected/217dbe70-f377-42d7-8b9a-bbc22f53b861-kube-api-access-kfbnh\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.908059 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.908068 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.908093 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/217dbe70-f377-42d7-8b9a-bbc22f53b861-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:31 crc kubenswrapper[4854]: I0103 06:06:31.983620 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "217dbe70-f377-42d7-8b9a-bbc22f53b861" (UID: "217dbe70-f377-42d7-8b9a-bbc22f53b861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.006298 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-config-data" (OuterVolumeSpecName: "config-data") pod "217dbe70-f377-42d7-8b9a-bbc22f53b861" (UID: "217dbe70-f377-42d7-8b9a-bbc22f53b861"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.013133 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.013237 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217dbe70-f377-42d7-8b9a-bbc22f53b861-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.453542 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"217dbe70-f377-42d7-8b9a-bbc22f53b861","Type":"ContainerDied","Data":"7a89647d6254bacb8a55dfc918ac25342957c88b7f412d834f8d6379c8893791"}
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.453612 4854 scope.go:117] "RemoveContainer" containerID="000200e55927526ddc97fdb2df68709ac6b8f08aebb43297bae5b548d4e75c59"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.453554 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.457577 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"861c2ac7-54e5-4409-ab4a-87bdfe074572","Type":"ContainerStarted","Data":"ecf4f92c6d74ef225075e118a7a6ff7f22b8f1dfaa4f14da96a2e66e7c89f734"}
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.502616 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.519371 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.521810 4854 scope.go:117] "RemoveContainer" containerID="a1f982716199eeee216fa9cde5b54660f4d8cf68fd9401a187541846fa072f27"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.522444 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.522403012 podStartE2EDuration="3.522403012s" podCreationTimestamp="2026-01-03 06:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:32.511846449 +0000 UTC m=+1570.838423031" watchObservedRunningTime="2026-01-03 06:06:32.522403012 +0000 UTC m=+1570.848979584"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.548460 4854 scope.go:117] "RemoveContainer" containerID="eba305729573796e2491bfe8e22a01c956224f31b8ca5337ae374b8ac948ef69"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.554578 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:06:32 crc kubenswrapper[4854]: E0103 06:06:32.555122 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="ceilometer-notification-agent"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.555141 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="ceilometer-notification-agent"
Jan 03 06:06:32 crc kubenswrapper[4854]: E0103 06:06:32.555158 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="ceilometer-central-agent"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.555167 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="ceilometer-central-agent"
Jan 03 06:06:32 crc kubenswrapper[4854]: E0103 06:06:32.555200 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="proxy-httpd"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.555207 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="proxy-httpd"
Jan 03 06:06:32 crc kubenswrapper[4854]: E0103 06:06:32.555219 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="sg-core"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.555224 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="sg-core"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.555424 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="proxy-httpd"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.555446 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="sg-core"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.555456 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="ceilometer-central-agent"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.555471 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" containerName="ceilometer-notification-agent"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.557471 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.567103 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.567255 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.579593 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.612372 4854 scope.go:117] "RemoveContainer" containerID="c36c0aa53f56e323a283942b7ab861f27bab7b9109892aac492aa97b18337f62"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.633669 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.633808 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-config-data\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.633892 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-scripts\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.633993 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.634013 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.634042 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7pd\" (UniqueName: \"kubernetes.io/projected/e65b3c77-9967-4e59-821c-d8583d0fd1f6-kube-api-access-lz7pd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.634067 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.736397 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.736705 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.736757 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz7pd\" (UniqueName: \"kubernetes.io/projected/e65b3c77-9967-4e59-821c-d8583d0fd1f6-kube-api-access-lz7pd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.736785 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.736867 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.736911 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-config-data\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.736975 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.737002 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-scripts\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.737685 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.742905 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.743611 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-config-data\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.745209 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-scripts\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.755142 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz7pd\" (UniqueName: \"kubernetes.io/projected/e65b3c77-9967-4e59-821c-d8583d0fd1f6-kube-api-access-lz7pd\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.758530 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " pod="openstack/ceilometer-0"
Jan 03 06:06:32 crc kubenswrapper[4854]: I0103 06:06:32.890165 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:06:33 crc kubenswrapper[4854]: W0103 06:06:33.437777 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode65b3c77_9967_4e59_821c_d8583d0fd1f6.slice/crio-14520fe35d0b5dd2da75d452eb1420d6b01f2544e377feb85f553bb7612c04c9 WatchSource:0}: Error finding container 14520fe35d0b5dd2da75d452eb1420d6b01f2544e377feb85f553bb7612c04c9: Status 404 returned error can't find the container with id 14520fe35d0b5dd2da75d452eb1420d6b01f2544e377feb85f553bb7612c04c9
Jan 03 06:06:33 crc kubenswrapper[4854]: I0103 06:06:33.439674 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:06:33 crc kubenswrapper[4854]: I0103 06:06:33.469998 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerStarted","Data":"14520fe35d0b5dd2da75d452eb1420d6b01f2544e377feb85f553bb7612c04c9"}
Jan 03 06:06:34 crc kubenswrapper[4854]: I0103 06:06:34.136037 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217dbe70-f377-42d7-8b9a-bbc22f53b861" path="/var/lib/kubelet/pods/217dbe70-f377-42d7-8b9a-bbc22f53b861/volumes"
Jan 03 06:06:34 crc kubenswrapper[4854]: I0103 06:06:34.483395 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerStarted","Data":"ebb9eccbeca4a0daccefb5b4e582ca35ed32f35efbfc286a2aee91e6031726f4"}
Jan 03 06:06:35 crc kubenswrapper[4854]: I0103 06:06:35.496829 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerStarted","Data":"a349bbe3adb6a35cb52578c54316b3a78e5e485469d1144eef8c248ec8fe4be1"}
Jan 03 06:06:35 crc kubenswrapper[4854]: I0103 06:06:35.826845 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4jpmx"
Jan 03 06:06:35 crc kubenswrapper[4854]: I0103 06:06:35.892158 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4jpmx"
Jan 03 06:06:36 crc kubenswrapper[4854]: I0103 06:06:36.068706 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jpmx"]
Jan 03 06:06:36 crc kubenswrapper[4854]: I0103 06:06:36.514030 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerStarted","Data":"1813dd09dc49a522a079fa638a0302b98eaacaba4a03a7a7258a7269823e953a"}
Jan 03 06:06:37 crc kubenswrapper[4854]: I0103 06:06:37.526188 4854 generic.go:334] "Generic (PLEG): container finished" podID="0ffba700-7bb8-458d-b50f-322985473e2d" containerID="601848ec57f14b23dfba8b7d3ce2cacb396834177d0434e06bcd0fcdb811bb8f" exitCode=0
Jan 03 06:06:37 crc kubenswrapper[4854]: I0103 06:06:37.526225 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptr5k" event={"ID":"0ffba700-7bb8-458d-b50f-322985473e2d","Type":"ContainerDied","Data":"601848ec57f14b23dfba8b7d3ce2cacb396834177d0434e06bcd0fcdb811bb8f"}
Jan 03 06:06:37 crc kubenswrapper[4854]: I0103 06:06:37.526699 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4jpmx" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="registry-server" containerID="cri-o://f7bcac251203930f839e6f54487da9cf8b8a44a4bc2047d8aaf6cffd2ccc8002" gracePeriod=2
Jan 03 06:06:37 crc kubenswrapper[4854]: I0103 06:06:37.636883 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:37 crc kubenswrapper[4854]: I0103 06:06:37.636942 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:37 crc kubenswrapper[4854]: I0103 06:06:37.690851 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:37 crc kubenswrapper[4854]: I0103 06:06:37.783630 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:38 crc kubenswrapper[4854]: I0103 06:06:38.542329 4854 generic.go:334] "Generic (PLEG): container finished" podID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerID="f7bcac251203930f839e6f54487da9cf8b8a44a4bc2047d8aaf6cffd2ccc8002" exitCode=0
Jan 03 06:06:38 crc kubenswrapper[4854]: I0103 06:06:38.542410 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jpmx" event={"ID":"210d7917-5e9d-486f-a1bc-d9297b633a41","Type":"ContainerDied","Data":"f7bcac251203930f839e6f54487da9cf8b8a44a4bc2047d8aaf6cffd2ccc8002"}
Jan 03 06:06:38 crc kubenswrapper[4854]: I0103 06:06:38.544119 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:38 crc kubenswrapper[4854]: I0103 06:06:38.544231 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.073806 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptr5k"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.201903 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-combined-ca-bundle\") pod \"0ffba700-7bb8-458d-b50f-322985473e2d\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") "
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.202333 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-scripts\") pod \"0ffba700-7bb8-458d-b50f-322985473e2d\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") "
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.202409 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-config-data\") pod \"0ffba700-7bb8-458d-b50f-322985473e2d\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") "
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.202526 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9slk\" (UniqueName: \"kubernetes.io/projected/0ffba700-7bb8-458d-b50f-322985473e2d-kube-api-access-z9slk\") pod \"0ffba700-7bb8-458d-b50f-322985473e2d\" (UID: \"0ffba700-7bb8-458d-b50f-322985473e2d\") "
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.210235 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-scripts" (OuterVolumeSpecName: "scripts") pod "0ffba700-7bb8-458d-b50f-322985473e2d" (UID: "0ffba700-7bb8-458d-b50f-322985473e2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.237301 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffba700-7bb8-458d-b50f-322985473e2d-kube-api-access-z9slk" (OuterVolumeSpecName: "kube-api-access-z9slk") pod "0ffba700-7bb8-458d-b50f-322985473e2d" (UID: "0ffba700-7bb8-458d-b50f-322985473e2d"). InnerVolumeSpecName "kube-api-access-z9slk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.262581 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-config-data" (OuterVolumeSpecName: "config-data") pod "0ffba700-7bb8-458d-b50f-322985473e2d" (UID: "0ffba700-7bb8-458d-b50f-322985473e2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.299268 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ffba700-7bb8-458d-b50f-322985473e2d" (UID: "0ffba700-7bb8-458d-b50f-322985473e2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.305704 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.305746 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.305763 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9slk\" (UniqueName: \"kubernetes.io/projected/0ffba700-7bb8-458d-b50f-322985473e2d-kube-api-access-z9slk\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.305780 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffba700-7bb8-458d-b50f-322985473e2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.465160 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jpmx"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.514745 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-295fb\" (UniqueName: \"kubernetes.io/projected/210d7917-5e9d-486f-a1bc-d9297b633a41-kube-api-access-295fb\") pod \"210d7917-5e9d-486f-a1bc-d9297b633a41\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") "
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.515341 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-catalog-content\") pod \"210d7917-5e9d-486f-a1bc-d9297b633a41\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") "
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.515447 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-utilities\") pod \"210d7917-5e9d-486f-a1bc-d9297b633a41\" (UID: \"210d7917-5e9d-486f-a1bc-d9297b633a41\") "
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.516710 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-utilities" (OuterVolumeSpecName: "utilities") pod "210d7917-5e9d-486f-a1bc-d9297b633a41" (UID: "210d7917-5e9d-486f-a1bc-d9297b633a41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.526362 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d7917-5e9d-486f-a1bc-d9297b633a41-kube-api-access-295fb" (OuterVolumeSpecName: "kube-api-access-295fb") pod "210d7917-5e9d-486f-a1bc-d9297b633a41" (UID: "210d7917-5e9d-486f-a1bc-d9297b633a41"). InnerVolumeSpecName "kube-api-access-295fb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.564953 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptr5k"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.564978 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptr5k" event={"ID":"0ffba700-7bb8-458d-b50f-322985473e2d","Type":"ContainerDied","Data":"d6551415f4e6b154c364d59b89a663b412df7d6782b87ab7de911655d9af1081"}
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.565014 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6551415f4e6b154c364d59b89a663b412df7d6782b87ab7de911655d9af1081"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.583046 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jpmx" event={"ID":"210d7917-5e9d-486f-a1bc-d9297b633a41","Type":"ContainerDied","Data":"ef1ac32f42930d4b088d326fb841147c83fd0327fa206a292b009d1d830e2645"}
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.583113 4854 scope.go:117] "RemoveContainer" containerID="f7bcac251203930f839e6f54487da9cf8b8a44a4bc2047d8aaf6cffd2ccc8002"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.583352 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jpmx"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.585679 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "210d7917-5e9d-486f-a1bc-d9297b633a41" (UID: "210d7917-5e9d-486f-a1bc-d9297b633a41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.603647 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerStarted","Data":"0573a538943e115d388b31f5596b9742bcb15f71279a8e0fe93f783e9b6b8939"}
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.603731 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.621572 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.621900 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210d7917-5e9d-486f-a1bc-d9297b633a41-utilities\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.621938 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-295fb\" (UniqueName: \"kubernetes.io/projected/210d7917-5e9d-486f-a1bc-d9297b633a41-kube-api-access-295fb\") on node \"crc\" DevicePath \"\""
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.636200 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.948616897 podStartE2EDuration="7.636179368s" podCreationTimestamp="2026-01-03 06:06:32 +0000 UTC" firstStartedPulling="2026-01-03 06:06:33.440415041 +0000 UTC m=+1571.766991613" lastFinishedPulling="2026-01-03 06:06:37.127977502 +0000 UTC m=+1575.454554084" observedRunningTime="2026-01-03 06:06:39.628300562 +0000 UTC m=+1577.954877144" watchObservedRunningTime="2026-01-03 06:06:39.636179368 +0000 UTC m=+1577.962755940"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.686762 4854 scope.go:117] "RemoveContainer" containerID="83e4ef196288962685d78d435bb370e8aad25320a944be8b26e55339f758daf4"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.716542 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 03 06:06:39 crc kubenswrapper[4854]: E0103 06:06:39.717221 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="extract-utilities"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.717245 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="extract-utilities"
Jan 03 06:06:39 crc kubenswrapper[4854]: E0103 06:06:39.717283 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="extract-content"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.717294 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="extract-content"
Jan 03 06:06:39 crc kubenswrapper[4854]: E0103 06:06:39.717308 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="registry-server"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.717317 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="registry-server"
Jan 03 06:06:39 crc kubenswrapper[4854]: E0103 06:06:39.717338 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffba700-7bb8-458d-b50f-322985473e2d" containerName="nova-cell0-conductor-db-sync"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.717346 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffba700-7bb8-458d-b50f-322985473e2d" containerName="nova-cell0-conductor-db-sync"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.717663 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffba700-7bb8-458d-b50f-322985473e2d" containerName="nova-cell0-conductor-db-sync"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.717688 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" containerName="registry-server"
Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.719194 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.738004 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.739364 4854 scope.go:117] "RemoveContainer" containerID="92d5ec8c664838f194363288ecd198757c60cf213d47ad871ba126da74846fea" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.740986 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.743345 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jp9zh" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.843174 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153a7b93-52c0-4abd-93bc-a3643bf4a897-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.843227 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfl9x\" (UniqueName: \"kubernetes.io/projected/153a7b93-52c0-4abd-93bc-a3643bf4a897-kube-api-access-rfl9x\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.843387 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153a7b93-52c0-4abd-93bc-a3643bf4a897-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.918126 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jpmx"] Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.927441 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jpmx"] Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.945047 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153a7b93-52c0-4abd-93bc-a3643bf4a897-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.945204 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153a7b93-52c0-4abd-93bc-a3643bf4a897-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.945233 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfl9x\" (UniqueName: \"kubernetes.io/projected/153a7b93-52c0-4abd-93bc-a3643bf4a897-kube-api-access-rfl9x\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.948813 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153a7b93-52c0-4abd-93bc-a3643bf4a897-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.949982 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153a7b93-52c0-4abd-93bc-a3643bf4a897-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:39 crc kubenswrapper[4854]: I0103 06:06:39.964539 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfl9x\" (UniqueName: \"kubernetes.io/projected/153a7b93-52c0-4abd-93bc-a3643bf4a897-kube-api-access-rfl9x\") pod \"nova-cell0-conductor-0\" (UID: \"153a7b93-52c0-4abd-93bc-a3643bf4a897\") " pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.130229 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d7917-5e9d-486f-a1bc-d9297b633a41" path="/var/lib/kubelet/pods/210d7917-5e9d-486f-a1bc-d9297b633a41/volumes" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.134670 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.140506 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.141264 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.179626 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.229586 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.615553 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.615871 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 03 06:06:40 crc kubenswrapper[4854]: I0103 06:06:40.829314 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 03 06:06:41 crc kubenswrapper[4854]: I0103 06:06:41.499667 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 03 06:06:41 crc kubenswrapper[4854]: I0103 06:06:41.500150 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 06:06:41 crc kubenswrapper[4854]: I0103 06:06:41.512916 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 03 06:06:41 crc kubenswrapper[4854]: I0103 06:06:41.632754 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"153a7b93-52c0-4abd-93bc-a3643bf4a897","Type":"ContainerStarted","Data":"d66ea6889faa69868a7046f2c3f44afae85af1bc4f681c6274554f24c1ad4298"} Jan 03 06:06:41 crc kubenswrapper[4854]: I0103 
06:06:41.633159 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"153a7b93-52c0-4abd-93bc-a3643bf4a897","Type":"ContainerStarted","Data":"fbb99e10765e134c2ac4f5b07a00be8309056c258b040e94444b44c97b7534c1"} Jan 03 06:06:41 crc kubenswrapper[4854]: I0103 06:06:41.634896 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:41 crc kubenswrapper[4854]: I0103 06:06:41.666493 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.666464469 podStartE2EDuration="2.666464469s" podCreationTimestamp="2026-01-03 06:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:41.660325386 +0000 UTC m=+1579.986901968" watchObservedRunningTime="2026-01-03 06:06:41.666464469 +0000 UTC m=+1579.993041041" Jan 03 06:06:42 crc kubenswrapper[4854]: I0103 06:06:42.746242 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 03 06:06:42 crc kubenswrapper[4854]: I0103 06:06:42.746573 4854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 06:06:43 crc kubenswrapper[4854]: I0103 06:06:43.320798 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 03 06:06:43 crc kubenswrapper[4854]: I0103 06:06:43.887599 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:43 crc kubenswrapper[4854]: I0103 06:06:43.888472 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="ceilometer-central-agent" containerID="cri-o://ebb9eccbeca4a0daccefb5b4e582ca35ed32f35efbfc286a2aee91e6031726f4" gracePeriod=30 Jan 03 06:06:43 crc kubenswrapper[4854]: I0103 06:06:43.888523 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="sg-core" containerID="cri-o://1813dd09dc49a522a079fa638a0302b98eaacaba4a03a7a7258a7269823e953a" gracePeriod=30 Jan 03 06:06:43 crc kubenswrapper[4854]: I0103 06:06:43.888602 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="proxy-httpd" containerID="cri-o://0573a538943e115d388b31f5596b9742bcb15f71279a8e0fe93f783e9b6b8939" gracePeriod=30 Jan 03 06:06:43 crc kubenswrapper[4854]: I0103 06:06:43.888818 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="ceilometer-notification-agent" containerID="cri-o://a349bbe3adb6a35cb52578c54316b3a78e5e485469d1144eef8c248ec8fe4be1" gracePeriod=30 Jan 03 06:06:44 crc kubenswrapper[4854]: I0103 06:06:44.665523 4854 generic.go:334] "Generic (PLEG): container finished" podID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerID="0573a538943e115d388b31f5596b9742bcb15f71279a8e0fe93f783e9b6b8939" exitCode=0 Jan 03 06:06:44 crc kubenswrapper[4854]: I0103 06:06:44.665859 4854 generic.go:334] "Generic (PLEG): container finished" podID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerID="1813dd09dc49a522a079fa638a0302b98eaacaba4a03a7a7258a7269823e953a" 
exitCode=2 Jan 03 06:06:44 crc kubenswrapper[4854]: I0103 06:06:44.665868 4854 generic.go:334] "Generic (PLEG): container finished" podID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerID="a349bbe3adb6a35cb52578c54316b3a78e5e485469d1144eef8c248ec8fe4be1" exitCode=0 Jan 03 06:06:44 crc kubenswrapper[4854]: I0103 06:06:44.665591 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerDied","Data":"0573a538943e115d388b31f5596b9742bcb15f71279a8e0fe93f783e9b6b8939"} Jan 03 06:06:44 crc kubenswrapper[4854]: I0103 06:06:44.665915 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerDied","Data":"1813dd09dc49a522a079fa638a0302b98eaacaba4a03a7a7258a7269823e953a"} Jan 03 06:06:44 crc kubenswrapper[4854]: I0103 06:06:44.665934 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerDied","Data":"a349bbe3adb6a35cb52578c54316b3a78e5e485469d1144eef8c248ec8fe4be1"} Jan 03 06:06:48 crc kubenswrapper[4854]: I0103 06:06:48.719234 4854 generic.go:334] "Generic (PLEG): container finished" podID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerID="ebb9eccbeca4a0daccefb5b4e582ca35ed32f35efbfc286a2aee91e6031726f4" exitCode=0 Jan 03 06:06:48 crc kubenswrapper[4854]: I0103 06:06:48.719622 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerDied","Data":"ebb9eccbeca4a0daccefb5b4e582ca35ed32f35efbfc286a2aee91e6031726f4"} Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.091448 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.184338 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-log-httpd\") pod \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.184816 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-run-httpd\") pod \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.185284 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz7pd\" (UniqueName: \"kubernetes.io/projected/e65b3c77-9967-4e59-821c-d8583d0fd1f6-kube-api-access-lz7pd\") pod \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.185433 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e65b3c77-9967-4e59-821c-d8583d0fd1f6" (UID: "e65b3c77-9967-4e59-821c-d8583d0fd1f6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.186231 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e65b3c77-9967-4e59-821c-d8583d0fd1f6" (UID: "e65b3c77-9967-4e59-821c-d8583d0fd1f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.186471 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-config-data\") pod \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.186595 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-scripts\") pod \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.186684 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-combined-ca-bundle\") pod \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.186841 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-sg-core-conf-yaml\") pod \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\" (UID: \"e65b3c77-9967-4e59-821c-d8583d0fd1f6\") " Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.188914 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.188950 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65b3c77-9967-4e59-821c-d8583d0fd1f6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.204833 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-scripts" (OuterVolumeSpecName: "scripts") pod "e65b3c77-9967-4e59-821c-d8583d0fd1f6" (UID: "e65b3c77-9967-4e59-821c-d8583d0fd1f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.209951 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65b3c77-9967-4e59-821c-d8583d0fd1f6-kube-api-access-lz7pd" (OuterVolumeSpecName: "kube-api-access-lz7pd") pod "e65b3c77-9967-4e59-821c-d8583d0fd1f6" (UID: "e65b3c77-9967-4e59-821c-d8583d0fd1f6"). InnerVolumeSpecName "kube-api-access-lz7pd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.218616 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e65b3c77-9967-4e59-821c-d8583d0fd1f6" (UID: "e65b3c77-9967-4e59-821c-d8583d0fd1f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.282462 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e65b3c77-9967-4e59-821c-d8583d0fd1f6" (UID: "e65b3c77-9967-4e59-821c-d8583d0fd1f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.290753 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.290779 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz7pd\" (UniqueName: \"kubernetes.io/projected/e65b3c77-9967-4e59-821c-d8583d0fd1f6-kube-api-access-lz7pd\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.290790 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.290800 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.347686 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-config-data" (OuterVolumeSpecName: "config-data") pod "e65b3c77-9967-4e59-821c-d8583d0fd1f6" (UID: "e65b3c77-9967-4e59-821c-d8583d0fd1f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.393277 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65b3c77-9967-4e59-821c-d8583d0fd1f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.742725 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65b3c77-9967-4e59-821c-d8583d0fd1f6","Type":"ContainerDied","Data":"14520fe35d0b5dd2da75d452eb1420d6b01f2544e377feb85f553bb7612c04c9"} Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.742796 4854 scope.go:117] "RemoveContainer" containerID="0573a538943e115d388b31f5596b9742bcb15f71279a8e0fe93f783e9b6b8939" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.742875 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.772384 4854 scope.go:117] "RemoveContainer" containerID="1813dd09dc49a522a079fa638a0302b98eaacaba4a03a7a7258a7269823e953a" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.796708 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.803808 4854 scope.go:117] "RemoveContainer" containerID="a349bbe3adb6a35cb52578c54316b3a78e5e485469d1144eef8c248ec8fe4be1" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.809599 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.835949 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:49 crc kubenswrapper[4854]: E0103 06:06:49.836641 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="proxy-httpd" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.836659 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="proxy-httpd" Jan 03 06:06:49 crc kubenswrapper[4854]: E0103 06:06:49.836685 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="ceilometer-notification-agent" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.836693 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="ceilometer-notification-agent" Jan 03 06:06:49 crc kubenswrapper[4854]: E0103 06:06:49.836713 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="sg-core" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.836719 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="sg-core" Jan 03 06:06:49 crc kubenswrapper[4854]: E0103 06:06:49.836737 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="ceilometer-central-agent" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.836743 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="ceilometer-central-agent" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.836943 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="sg-core" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.836960 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="ceilometer-central-agent" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.836970 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="ceilometer-notification-agent" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.836992 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" containerName="proxy-httpd" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.839237 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.841442 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.841899 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.845180 4854 scope.go:117] "RemoveContainer" containerID="ebb9eccbeca4a0daccefb5b4e582ca35ed32f35efbfc286a2aee91e6031726f4" Jan 03 06:06:49 crc kubenswrapper[4854]: I0103 06:06:49.873880 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.006785 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-log-httpd\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.006852 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.006989 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-run-httpd\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.007023 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-config-data\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.007067 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-scripts\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.007142 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb6vh\" (UniqueName: \"kubernetes.io/projected/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-kube-api-access-wb6vh\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.007217 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.108965 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-config-data\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.109005 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-scripts\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.109819 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb6vh\" (UniqueName: \"kubernetes.io/projected/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-kube-api-access-wb6vh\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.109903 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.109945 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-log-httpd\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.109985 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.110058 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-run-httpd\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.110730 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-log-httpd\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.111179 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-run-httpd\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.114757 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-scripts\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.127895 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.138721 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-config-data\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.143922 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.153403 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb6vh\" (UniqueName: \"kubernetes.io/projected/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-kube-api-access-wb6vh\") pod \"ceilometer-0\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.161948 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65b3c77-9967-4e59-821c-d8583d0fd1f6" path="/var/lib/kubelet/pods/e65b3c77-9967-4e59-821c-d8583d0fd1f6/volumes" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.169432 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.187985 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.673447 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6q98m"] Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.675749 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.678108 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.678504 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.716000 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6q98m"] Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.763928 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.831752 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-config-data\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.832015 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.832053 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bszd\" (UniqueName: \"kubernetes.io/projected/bc666e24-12a2-4bea-bded-bb83c896dc9d-kube-api-access-8bszd\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.832178 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-scripts\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.877583 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.879458 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.887735 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.910574 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.937193 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-config-data\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.937255 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.937309 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bszd\" (UniqueName: \"kubernetes.io/projected/bc666e24-12a2-4bea-bded-bb83c896dc9d-kube-api-access-8bszd\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.937403 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-scripts\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.954808 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-scripts\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.955416 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-config-data\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.966412 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.968162 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.970209 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.973161 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bszd\" (UniqueName: \"kubernetes.io/projected/bc666e24-12a2-4bea-bded-bb83c896dc9d-kube-api-access-8bszd\") pod \"nova-cell0-cell-mapping-6q98m\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:50 crc kubenswrapper[4854]: I0103 06:06:50.973392 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.007553 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.016425 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-45z7t"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.018024 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.033732 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.039608 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.039660 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pdw\" (UniqueName: \"kubernetes.io/projected/32c47867-8d22-4340-98a0-37ae6b098d80-kube-api-access-62pdw\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.039804 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-config-data\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.039868 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.039941 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e34b84-a209-4662-a8df-3e7aff354daa-logs\") pod \"nova-api-0\" (UID: 
\"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.039999 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.040030 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spphc\" (UniqueName: \"kubernetes.io/projected/21e34b84-a209-4662-a8df-3e7aff354daa-kube-api-access-spphc\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.115064 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-45z7t"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.137165 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-b805-account-create-update-hw8rw"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.138804 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145177 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-config-data\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145225 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145256 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-operator-scripts\") pod \"aodh-db-create-45z7t\" (UID: \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\") " pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145309 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e34b84-a209-4662-a8df-3e7aff354daa-logs\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145361 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145390 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spphc\" (UniqueName: \"kubernetes.io/projected/21e34b84-a209-4662-a8df-3e7aff354daa-kube-api-access-spphc\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " 
pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145406 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5nd\" (UniqueName: \"kubernetes.io/projected/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-kube-api-access-gq5nd\") pod \"aodh-db-create-45z7t\" (UID: \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\") " pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145440 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.145455 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62pdw\" (UniqueName: \"kubernetes.io/projected/32c47867-8d22-4340-98a0-37ae6b098d80-kube-api-access-62pdw\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.152554 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.154056 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e34b84-a209-4662-a8df-3e7aff354daa-logs\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.154313 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.165808 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-config-data\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.167653 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.168058 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.172221 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62pdw\" (UniqueName: \"kubernetes.io/projected/32c47867-8d22-4340-98a0-37ae6b098d80-kube-api-access-62pdw\") pod \"nova-cell1-novncproxy-0\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.172282 4854 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.174856 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.177906 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.197477 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spphc\" (UniqueName: \"kubernetes.io/projected/21e34b84-a209-4662-a8df-3e7aff354daa-kube-api-access-spphc\") pod \"nova-api-0\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.212710 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.227159 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.244135 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-b805-account-create-update-hw8rw"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.247644 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-config-data\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.247830 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0800292-1f7a-4d53-85b2-f256b8b27b7f-operator-scripts\") pod \"aodh-b805-account-create-update-hw8rw\" (UID: \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\") " pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.247989 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2z59\" (UniqueName: \"kubernetes.io/projected/c0800292-1f7a-4d53-85b2-f256b8b27b7f-kube-api-access-n2z59\") pod \"aodh-b805-account-create-update-hw8rw\" (UID: \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\") " pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.248045 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-operator-scripts\") pod \"aodh-db-create-45z7t\" (UID: \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\") " pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.248110 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.248312 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq5nd\" (UniqueName: 
\"kubernetes.io/projected/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-kube-api-access-gq5nd\") pod \"aodh-db-create-45z7t\" (UID: \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\") " pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.248366 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnsv5\" (UniqueName: \"kubernetes.io/projected/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-kube-api-access-vnsv5\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.276808 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.289504 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-operator-scripts\") pod \"aodh-db-create-45z7t\" (UID: \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\") " pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.300702 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq5nd\" (UniqueName: \"kubernetes.io/projected/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-kube-api-access-gq5nd\") pod \"aodh-db-create-45z7t\" (UID: \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\") " pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.306072 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.307882 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.308534 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.313459 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.352502 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0800292-1f7a-4d53-85b2-f256b8b27b7f-operator-scripts\") pod \"aodh-b805-account-create-update-hw8rw\" (UID: \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\") " pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.352777 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2z59\" (UniqueName: \"kubernetes.io/projected/c0800292-1f7a-4d53-85b2-f256b8b27b7f-kube-api-access-n2z59\") pod \"aodh-b805-account-create-update-hw8rw\" (UID: \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\") " pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.352863 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.353002 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnsv5\" (UniqueName: \"kubernetes.io/projected/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-kube-api-access-vnsv5\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.353035 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-config-data\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.357213 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0800292-1f7a-4d53-85b2-f256b8b27b7f-operator-scripts\") pod \"aodh-b805-account-create-update-hw8rw\" (UID: \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\") " pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.374174 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.386499 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-config-data\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.390962 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2z59\" (UniqueName: \"kubernetes.io/projected/c0800292-1f7a-4d53-85b2-f256b8b27b7f-kube-api-access-n2z59\") pod 
\"aodh-b805-account-create-update-hw8rw\" (UID: \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\") " pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.419409 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.425826 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnsv5\" (UniqueName: \"kubernetes.io/projected/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-kube-api-access-vnsv5\") pod \"nova-scheduler-0\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.463560 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.463658 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d880689e-e8bd-4759-9d27-699b26516946-logs\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.463719 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-config-data\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.463744 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzhzt\" (UniqueName: \"kubernetes.io/projected/d880689e-e8bd-4759-9d27-699b26516946-kube-api-access-jzhzt\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.556850 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vk4z5"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.559294 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.565635 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-config-data\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.565692 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzhzt\" (UniqueName: \"kubernetes.io/projected/d880689e-e8bd-4759-9d27-699b26516946-kube-api-access-jzhzt\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.565856 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.565936 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d880689e-e8bd-4759-9d27-699b26516946-logs\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.566579 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d880689e-e8bd-4759-9d27-699b26516946-logs\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.578309 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-config-data\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.585336 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzhzt\" (UniqueName: \"kubernetes.io/projected/d880689e-e8bd-4759-9d27-699b26516946-kube-api-access-jzhzt\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.586185 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.594213 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vk4z5"] Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.656291 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.676583 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-config\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.676631 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8x6\" (UniqueName: \"kubernetes.io/projected/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-kube-api-access-fg8x6\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.676655 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.676717 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.676749 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.676872 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.677239 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.695252 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.795955 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-config\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.796007 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg8x6\" (UniqueName: \"kubernetes.io/projected/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-kube-api-access-fg8x6\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.796049 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.796213 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.796270 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.797993 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-config\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.798384 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.799129 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.799672 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 
06:06:51.804977 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.822645 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.835886 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg8x6\" (UniqueName: \"kubernetes.io/projected/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-kube-api-access-fg8x6\") pod \"dnsmasq-dns-9b86998b5-vk4z5\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.841962 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerStarted","Data":"3b08cab72b3a44826e956f11f46978672b20914aae3fd119009ec8846f8cb322"} Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.914268 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:51 crc kubenswrapper[4854]: I0103 06:06:51.939816 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6q98m"] Jan 03 06:06:52 crc kubenswrapper[4854]: W0103 06:06:52.015996 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc666e24_12a2_4bea_bded_bb83c896dc9d.slice/crio-d233bf72c3a05815089057cf9b2a97a69026d680b50173efd00cec03f50965ca WatchSource:0}: Error finding container d233bf72c3a05815089057cf9b2a97a69026d680b50173efd00cec03f50965ca: Status 404 returned error can't find the container with id d233bf72c3a05815089057cf9b2a97a69026d680b50173efd00cec03f50965ca Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.286714 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-45z7t"] Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.301534 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.662054 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-27kbq"] Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.690070 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.692751 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.694700 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.712650 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-27kbq"] Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.796713 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.845900 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.846268 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9s2k\" (UniqueName: \"kubernetes.io/projected/f93995e8-15fe-446c-b731-ade43a634b9b-kube-api-access-k9s2k\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.846300 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-config-data\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.846394 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-scripts\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.873218 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6q98m" event={"ID":"bc666e24-12a2-4bea-bded-bb83c896dc9d","Type":"ContainerStarted","Data":"d233bf72c3a05815089057cf9b2a97a69026d680b50173efd00cec03f50965ca"} Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.883871 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerStarted","Data":"1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4"} Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.888619 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32c47867-8d22-4340-98a0-37ae6b098d80","Type":"ContainerStarted","Data":"df1968a863c699ff1b208889de45fe0e2c47318aa9d79f9052a2e9bdee5da694"} Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.889571 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-45z7t" 
event={"ID":"33d7a9cf-9ea2-4e02-b431-4c6b1df21337","Type":"ContainerStarted","Data":"06406695dcbaa8ab13a441e50ef6042f53827b57c7d87bc04479e4e2d5322d1e"} Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.904574 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.950135 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.950195 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9s2k\" (UniqueName: \"kubernetes.io/projected/f93995e8-15fe-446c-b731-ade43a634b9b-kube-api-access-k9s2k\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.950226 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-config-data\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.950283 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-scripts\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.958717 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-scripts\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.969236 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-config-data\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.970679 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:52 crc kubenswrapper[4854]: I0103 06:06:52.987053 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9s2k\" (UniqueName: \"kubernetes.io/projected/f93995e8-15fe-446c-b731-ade43a634b9b-kube-api-access-k9s2k\") pod \"nova-cell1-conductor-db-sync-27kbq\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:53 crc kubenswrapper[4854]: I0103 06:06:53.036238 4854 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:06:53 crc kubenswrapper[4854]: I0103 06:06:53.375240 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-b805-account-create-update-hw8rw"] Jan 03 06:06:53 crc kubenswrapper[4854]: I0103 06:06:53.386710 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:06:53 crc kubenswrapper[4854]: I0103 06:06:53.401963 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vk4z5"] Jan 03 06:06:53 crc kubenswrapper[4854]: W0103 06:06:53.473483 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd274c7e9_dee8_408e_a5fe_2cbb9d319dbf.slice/crio-1c6fe71aa07b68de3ff0e78e18d73bd939c87af8783f53825c108661e5812190 WatchSource:0}: Error finding container 1c6fe71aa07b68de3ff0e78e18d73bd939c87af8783f53825c108661e5812190: Status 404 returned error can't find the container with id 1c6fe71aa07b68de3ff0e78e18d73bd939c87af8783f53825c108661e5812190 Jan 03 06:06:53 crc kubenswrapper[4854]: W0103 06:06:53.478359 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0800292_1f7a_4d53_85b2_f256b8b27b7f.slice/crio-9e5f8efa1b89bd89fc15c3bb8fca9513bd555d7eb64edc4a4b58f2d06f520a29 WatchSource:0}: Error finding container 9e5f8efa1b89bd89fc15c3bb8fca9513bd555d7eb64edc4a4b58f2d06f520a29: Status 404 returned error can't find the container with id 9e5f8efa1b89bd89fc15c3bb8fca9513bd555d7eb64edc4a4b58f2d06f520a29 Jan 03 06:06:53 crc kubenswrapper[4854]: I0103 06:06:53.736034 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-27kbq"] Jan 03 06:06:53 crc kubenswrapper[4854]: I0103 06:06:53.942833 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21e34b84-a209-4662-a8df-3e7aff354daa","Type":"ContainerStarted","Data":"c6ad0d53b5abfc1e3d8378bea5fd494b9239d5572fdc4a3b26bdb85ec765b4dc"} Jan 03 06:06:53 crc kubenswrapper[4854]: I0103 06:06:53.981455 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6q98m" event={"ID":"bc666e24-12a2-4bea-bded-bb83c896dc9d","Type":"ContainerStarted","Data":"75759a88f790e3f8574815d73410c788d3731d4c04f53edf6e75193f1d017620"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.003864 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerStarted","Data":"11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.012617 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a292205e-b4eb-4f28-a9a8-9fbceaea3f60","Type":"ContainerStarted","Data":"1a837ecb8d541b9c7c1380a335194b7c910484c8d0884909015eaff8e1090ad0"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.020783 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d880689e-e8bd-4759-9d27-699b26516946","Type":"ContainerStarted","Data":"c1f5ab964266b23295426357784ebf047e6b8f738b0cab1a98ef00d3e6bb58b2"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.031271 4854 generic.go:334] "Generic (PLEG): container finished" podID="33d7a9cf-9ea2-4e02-b431-4c6b1df21337" 
containerID="a587d926d89f4f7548fe5710ef01a724d92db7802a84764b2b2f8e035c7622b1" exitCode=0 Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.031398 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-45z7t" event={"ID":"33d7a9cf-9ea2-4e02-b431-4c6b1df21337","Type":"ContainerDied","Data":"a587d926d89f4f7548fe5710ef01a724d92db7802a84764b2b2f8e035c7622b1"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.040699 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b805-account-create-update-hw8rw" event={"ID":"c0800292-1f7a-4d53-85b2-f256b8b27b7f","Type":"ContainerStarted","Data":"9e5f8efa1b89bd89fc15c3bb8fca9513bd555d7eb64edc4a4b58f2d06f520a29"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.045967 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6q98m" podStartSLOduration=4.045947534 podStartE2EDuration="4.045947534s" podCreationTimestamp="2026-01-03 06:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:54.003033704 +0000 UTC m=+1592.329610306" watchObservedRunningTime="2026-01-03 06:06:54.045947534 +0000 UTC m=+1592.372524106" Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.058783 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" event={"ID":"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf","Type":"ContainerStarted","Data":"43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.058992 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" event={"ID":"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf","Type":"ContainerStarted","Data":"1c6fe71aa07b68de3ff0e78e18d73bd939c87af8783f53825c108661e5812190"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.069104 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-27kbq" event={"ID":"f93995e8-15fe-446c-b731-ade43a634b9b","Type":"ContainerStarted","Data":"ee7ec334159266f1d13884c423919b59b92ad8b5884016c359fe4bc86950f63f"} Jan 03 06:06:54 crc kubenswrapper[4854]: I0103 06:06:54.105225 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-b805-account-create-update-hw8rw" podStartSLOduration=3.105210841 podStartE2EDuration="3.105210841s" podCreationTimestamp="2026-01-03 06:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:54.069612224 +0000 UTC m=+1592.396188796" watchObservedRunningTime="2026-01-03 06:06:54.105210841 +0000 UTC m=+1592.431787413" Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.072579 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.085452 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.100046 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerStarted","Data":"8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade"} Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.103465 4854 generic.go:334] "Generic (PLEG): container finished" 
podID="c0800292-1f7a-4d53-85b2-f256b8b27b7f" containerID="eb553f9592b832041cf1fdf2c9ef408e88c6069171e67e27b21406565c319ae4" exitCode=0 Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.103540 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b805-account-create-update-hw8rw" event={"ID":"c0800292-1f7a-4d53-85b2-f256b8b27b7f","Type":"ContainerDied","Data":"eb553f9592b832041cf1fdf2c9ef408e88c6069171e67e27b21406565c319ae4"} Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.110940 4854 generic.go:334] "Generic (PLEG): container finished" podID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerID="43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3" exitCode=0 Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.111036 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" event={"ID":"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf","Type":"ContainerDied","Data":"43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3"} Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.111188 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" event={"ID":"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf","Type":"ContainerStarted","Data":"cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda"} Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.111515 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.114673 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-27kbq" event={"ID":"f93995e8-15fe-446c-b731-ade43a634b9b","Type":"ContainerStarted","Data":"af0831dd29cf03129c1714e21950c8a6ef74079e760c0d89508cfbe7f72d2a74"} Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.153023 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" podStartSLOduration=4.153002826 podStartE2EDuration="4.153002826s" podCreationTimestamp="2026-01-03 06:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:55.14395918 +0000 UTC m=+1593.470535752" watchObservedRunningTime="2026-01-03 06:06:55.153002826 +0000 UTC m=+1593.479579398" Jan 03 06:06:55 crc kubenswrapper[4854]: I0103 06:06:55.173406 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-27kbq" podStartSLOduration=3.173330233 podStartE2EDuration="3.173330233s" podCreationTimestamp="2026-01-03 06:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:06:55.166523203 +0000 UTC m=+1593.493099775" watchObservedRunningTime="2026-01-03 06:06:55.173330233 +0000 UTC m=+1593.499906805" Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.131952 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.136621 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-45z7t" event={"ID":"33d7a9cf-9ea2-4e02-b431-4c6b1df21337","Type":"ContainerDied","Data":"06406695dcbaa8ab13a441e50ef6042f53827b57c7d87bc04479e4e2d5322d1e"} Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.136650 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06406695dcbaa8ab13a441e50ef6042f53827b57c7d87bc04479e4e2d5322d1e" Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.242438 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-operator-scripts\") pod \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\" (UID: \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\") " Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.242715 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq5nd\" (UniqueName: \"kubernetes.io/projected/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-kube-api-access-gq5nd\") pod \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\" (UID: \"33d7a9cf-9ea2-4e02-b431-4c6b1df21337\") " Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.242928 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33d7a9cf-9ea2-4e02-b431-4c6b1df21337" (UID: "33d7a9cf-9ea2-4e02-b431-4c6b1df21337"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.245180 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.268149 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-kube-api-access-gq5nd" (OuterVolumeSpecName: "kube-api-access-gq5nd") pod "33d7a9cf-9ea2-4e02-b431-4c6b1df21337" (UID: "33d7a9cf-9ea2-4e02-b431-4c6b1df21337"). InnerVolumeSpecName "kube-api-access-gq5nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:56 crc kubenswrapper[4854]: I0103 06:06:56.353288 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq5nd\" (UniqueName: \"kubernetes.io/projected/33d7a9cf-9ea2-4e02-b431-4c6b1df21337-kube-api-access-gq5nd\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:57 crc kubenswrapper[4854]: I0103 06:06:57.146716 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-45z7t" Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.197327 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-b805-account-create-update-hw8rw" event={"ID":"c0800292-1f7a-4d53-85b2-f256b8b27b7f","Type":"ContainerDied","Data":"9e5f8efa1b89bd89fc15c3bb8fca9513bd555d7eb64edc4a4b58f2d06f520a29"} Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.197583 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e5f8efa1b89bd89fc15c3bb8fca9513bd555d7eb64edc4a4b58f2d06f520a29" Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.242396 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.250705 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2z59\" (UniqueName: \"kubernetes.io/projected/c0800292-1f7a-4d53-85b2-f256b8b27b7f-kube-api-access-n2z59\") pod \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\" (UID: \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\") " Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.250779 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0800292-1f7a-4d53-85b2-f256b8b27b7f-operator-scripts\") pod \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\" (UID: \"c0800292-1f7a-4d53-85b2-f256b8b27b7f\") " Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.252229 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0800292-1f7a-4d53-85b2-f256b8b27b7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0800292-1f7a-4d53-85b2-f256b8b27b7f" (UID: "c0800292-1f7a-4d53-85b2-f256b8b27b7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.268848 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0800292-1f7a-4d53-85b2-f256b8b27b7f-kube-api-access-n2z59" (OuterVolumeSpecName: "kube-api-access-n2z59") pod "c0800292-1f7a-4d53-85b2-f256b8b27b7f" (UID: "c0800292-1f7a-4d53-85b2-f256b8b27b7f"). InnerVolumeSpecName "kube-api-access-n2z59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.355474 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2z59\" (UniqueName: \"kubernetes.io/projected/c0800292-1f7a-4d53-85b2-f256b8b27b7f-kube-api-access-n2z59\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:58 crc kubenswrapper[4854]: I0103 06:06:58.355786 4854 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0800292-1f7a-4d53-85b2-f256b8b27b7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.213466 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21e34b84-a209-4662-a8df-3e7aff354daa","Type":"ContainerStarted","Data":"f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34"} Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.217799 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerStarted","Data":"97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1"} Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.219890 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.224910 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32c47867-8d22-4340-98a0-37ae6b098d80","Type":"ContainerStarted","Data":"1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87"} Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.225050 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="32c47867-8d22-4340-98a0-37ae6b098d80" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87" gracePeriod=30 Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.236356 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a292205e-b4eb-4f28-a9a8-9fbceaea3f60","Type":"ContainerStarted","Data":"87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a"} Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.241032 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-b805-account-create-update-hw8rw" Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.241047 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d880689e-e8bd-4759-9d27-699b26516946","Type":"ContainerStarted","Data":"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0"} Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.252400 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.982904816 podStartE2EDuration="10.252376765s" podCreationTimestamp="2026-01-03 06:06:49 +0000 UTC" firstStartedPulling="2026-01-03 06:06:50.830412501 +0000 UTC m=+1589.156989073" lastFinishedPulling="2026-01-03 06:06:58.09988445 +0000 UTC m=+1596.426461022" observedRunningTime="2026-01-03 06:06:59.240648262 +0000 UTC m=+1597.567224834" watchObservedRunningTime="2026-01-03 06:06:59.252376765 +0000 UTC m=+1597.578953337" Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.267061 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.691946471 podStartE2EDuration="8.26703585s" podCreationTimestamp="2026-01-03 06:06:51 +0000 UTC" firstStartedPulling="2026-01-03 06:06:53.48746242 +0000 UTC m=+1591.814038992" lastFinishedPulling="2026-01-03 06:06:58.062551799 +0000 UTC m=+1596.389128371" observedRunningTime="2026-01-03 06:06:59.263606685 +0000 UTC m=+1597.590183267" watchObservedRunningTime="2026-01-03 06:06:59.26703585 +0000 UTC m=+1597.593612422" Jan 03 06:06:59 crc kubenswrapper[4854]: I0103 06:06:59.293451 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.6530795190000003 podStartE2EDuration="9.293430559s" podCreationTimestamp="2026-01-03 06:06:50 +0000 UTC" firstStartedPulling="2026-01-03 06:06:52.422472356 +0000 UTC m=+1590.749048928" lastFinishedPulling="2026-01-03 06:06:58.062823396 +0000 UTC m=+1596.389399968" observedRunningTime="2026-01-03 06:06:59.283026759 +0000 UTC m=+1597.609603331" watchObservedRunningTime="2026-01-03 06:06:59.293430559 +0000 UTC m=+1597.620007131" Jan 03 06:07:00 crc kubenswrapper[4854]: I0103 06:07:00.253634 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21e34b84-a209-4662-a8df-3e7aff354daa","Type":"ContainerStarted","Data":"e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05"} Jan 03 06:07:00 crc kubenswrapper[4854]: I0103 06:07:00.255468 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d880689e-e8bd-4759-9d27-699b26516946","Type":"ContainerStarted","Data":"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3"} Jan 03 06:07:00 crc kubenswrapper[4854]: I0103 06:07:00.255982 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d880689e-e8bd-4759-9d27-699b26516946" containerName="nova-metadata-log" containerID="cri-o://4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0" gracePeriod=30 Jan 03 06:07:00 crc kubenswrapper[4854]: I0103 06:07:00.256024 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d880689e-e8bd-4759-9d27-699b26516946" containerName="nova-metadata-metadata" containerID="cri-o://70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3" gracePeriod=30 Jan 03 06:07:00 
crc kubenswrapper[4854]: I0103 06:07:00.284718 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.10261574 podStartE2EDuration="10.284679573s" podCreationTimestamp="2026-01-03 06:06:50 +0000 UTC" firstStartedPulling="2026-01-03 06:06:52.880441235 +0000 UTC m=+1591.207017807" lastFinishedPulling="2026-01-03 06:06:58.062505068 +0000 UTC m=+1596.389081640" observedRunningTime="2026-01-03 06:07:00.275359911 +0000 UTC m=+1598.601936503" watchObservedRunningTime="2026-01-03 06:07:00.284679573 +0000 UTC m=+1598.611256155" Jan 03 06:07:00 crc kubenswrapper[4854]: I0103 06:07:00.298197 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.156625727 podStartE2EDuration="9.29817731s" podCreationTimestamp="2026-01-03 06:06:51 +0000 UTC" firstStartedPulling="2026-01-03 06:06:52.940489822 +0000 UTC m=+1591.267066394" lastFinishedPulling="2026-01-03 06:06:58.082041405 +0000 UTC m=+1596.408617977" observedRunningTime="2026-01-03 06:07:00.295909233 +0000 UTC m=+1598.622485805" watchObservedRunningTime="2026-01-03 06:07:00.29817731 +0000 UTC m=+1598.624753882" Jan 03 06:07:00 crc kubenswrapper[4854]: I0103 06:07:00.953169 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.037559 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d880689e-e8bd-4759-9d27-699b26516946-logs\") pod \"d880689e-e8bd-4759-9d27-699b26516946\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.037877 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzhzt\" (UniqueName: \"kubernetes.io/projected/d880689e-e8bd-4759-9d27-699b26516946-kube-api-access-jzhzt\") pod \"d880689e-e8bd-4759-9d27-699b26516946\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.038000 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d880689e-e8bd-4759-9d27-699b26516946-logs" (OuterVolumeSpecName: "logs") pod "d880689e-e8bd-4759-9d27-699b26516946" (UID: "d880689e-e8bd-4759-9d27-699b26516946"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.038015 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-combined-ca-bundle\") pod \"d880689e-e8bd-4759-9d27-699b26516946\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.038232 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-config-data\") pod \"d880689e-e8bd-4759-9d27-699b26516946\" (UID: \"d880689e-e8bd-4759-9d27-699b26516946\") " Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.039307 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d880689e-e8bd-4759-9d27-699b26516946-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.043517 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d880689e-e8bd-4759-9d27-699b26516946-kube-api-access-jzhzt" (OuterVolumeSpecName: "kube-api-access-jzhzt") pod "d880689e-e8bd-4759-9d27-699b26516946" (UID: "d880689e-e8bd-4759-9d27-699b26516946"). InnerVolumeSpecName "kube-api-access-jzhzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.070548 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-config-data" (OuterVolumeSpecName: "config-data") pod "d880689e-e8bd-4759-9d27-699b26516946" (UID: "d880689e-e8bd-4759-9d27-699b26516946"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.110904 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d880689e-e8bd-4759-9d27-699b26516946" (UID: "d880689e-e8bd-4759-9d27-699b26516946"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.141667 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzhzt\" (UniqueName: \"kubernetes.io/projected/d880689e-e8bd-4759-9d27-699b26516946-kube-api-access-jzhzt\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.141699 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.141709 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d880689e-e8bd-4759-9d27-699b26516946-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.213662 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.213733 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.270932 4854 generic.go:334] "Generic (PLEG): container finished" podID="d880689e-e8bd-4759-9d27-699b26516946" containerID="70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3" exitCode=0 Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.270973 4854 generic.go:334] "Generic (PLEG): container finished" podID="d880689e-e8bd-4759-9d27-699b26516946" containerID="4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0" exitCode=143 Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.270995 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d880689e-e8bd-4759-9d27-699b26516946","Type":"ContainerDied","Data":"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3"} Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.271036 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d880689e-e8bd-4759-9d27-699b26516946","Type":"ContainerDied","Data":"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0"} Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.271050 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d880689e-e8bd-4759-9d27-699b26516946","Type":"ContainerDied","Data":"c1f5ab964266b23295426357784ebf047e6b8f738b0cab1a98ef00d3e6bb58b2"} Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.271069 4854 scope.go:117] "RemoveContainer" containerID="70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.272314 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.279178 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.307353 4854 scope.go:117] "RemoveContainer" containerID="4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.329764 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.353482 4854 scope.go:117] "RemoveContainer" containerID="70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.353608 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:01 crc kubenswrapper[4854]: E0103 06:07:01.356361 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3\": container with ID starting with 70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3 not found: ID does not exist" containerID="70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.356408 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3"} err="failed to get container status \"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3\": rpc error: code = NotFound desc = could not find container \"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3\": container with ID starting with 70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3 not found: ID does not exist" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.356436 4854 scope.go:117] "RemoveContainer" containerID="4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0" Jan 03 06:07:01 crc kubenswrapper[4854]: E0103 06:07:01.357594 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0\": container with ID starting with 4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0 not found: ID does not exist" containerID="4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.357622 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0"} err="failed to get container status \"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0\": rpc error: code = NotFound desc = could not find container \"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0\": container with ID starting with 4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0 not found: ID does not exist" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.357641 4854 scope.go:117] "RemoveContainer" containerID="70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.358336 4854 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3"} err="failed to get container status \"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3\": rpc error: code = NotFound desc = could not find container \"70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3\": container with ID starting with 70ba6970d1273e00268143106b6ca81983253a5083ebcdf25a1ff59bf281d6a3 not found: ID does not exist" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.358366 4854 scope.go:117] "RemoveContainer" containerID="4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.359249 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0"} err="failed to get container status \"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0\": rpc error: code = NotFound desc = could not find container \"4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0\": container with ID starting with 4284e03c918969bea5d7b01cff2a41950428e3660f47404a4acb462bab7135b0 not found: ID does not exist" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.368927 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:01 crc kubenswrapper[4854]: E0103 06:07:01.369593 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d880689e-e8bd-4759-9d27-699b26516946" containerName="nova-metadata-metadata" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.369614 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d880689e-e8bd-4759-9d27-699b26516946" containerName="nova-metadata-metadata" Jan 03 06:07:01 crc kubenswrapper[4854]: E0103 06:07:01.369627 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d880689e-e8bd-4759-9d27-699b26516946" containerName="nova-metadata-log" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.369635 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d880689e-e8bd-4759-9d27-699b26516946" containerName="nova-metadata-log" Jan 03 06:07:01 crc kubenswrapper[4854]: E0103 06:07:01.369648 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d7a9cf-9ea2-4e02-b431-4c6b1df21337" containerName="mariadb-database-create" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.369654 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d7a9cf-9ea2-4e02-b431-4c6b1df21337" containerName="mariadb-database-create" Jan 03 06:07:01 crc kubenswrapper[4854]: E0103 06:07:01.369663 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0800292-1f7a-4d53-85b2-f256b8b27b7f" containerName="mariadb-account-create-update" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.369669 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0800292-1f7a-4d53-85b2-f256b8b27b7f" containerName="mariadb-account-create-update" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.369920 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0800292-1f7a-4d53-85b2-f256b8b27b7f" containerName="mariadb-account-create-update" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.369944 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="d880689e-e8bd-4759-9d27-699b26516946" containerName="nova-metadata-metadata" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.369959 4854 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d880689e-e8bd-4759-9d27-699b26516946" containerName="nova-metadata-log" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.369972 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d7a9cf-9ea2-4e02-b431-4c6b1df21337" containerName="mariadb-database-create" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.371314 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.373908 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.374171 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.407826 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.458772 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02d25bb-c44c-498c-811d-05e60b417640-logs\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.460664 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbcmr\" (UniqueName: \"kubernetes.io/projected/c02d25bb-c44c-498c-811d-05e60b417640-kube-api-access-qbcmr\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.460732 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.460785 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.460962 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-config-data\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.521220 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-6hxsh"] Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.522940 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.525595 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.525828 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.525949 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bkf2n" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.526232 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.545832 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6hxsh"] Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.563944 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-config-data\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.564131 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-combined-ca-bundle\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.564189 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02d25bb-c44c-498c-811d-05e60b417640-logs\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.564217 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbcmr\" (UniqueName: \"kubernetes.io/projected/c02d25bb-c44c-498c-811d-05e60b417640-kube-api-access-qbcmr\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.564249 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.564276 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.564310 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j48xd\" (UniqueName: \"kubernetes.io/projected/43d451de-7824-46ed-9709-d884d2df08e0-kube-api-access-j48xd\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: 
I0103 06:07:01.564361 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-config-data\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.564390 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-scripts\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.564927 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02d25bb-c44c-498c-811d-05e60b417640-logs\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.569665 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.574188 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-config-data\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.577226 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.582569 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbcmr\" (UniqueName: \"kubernetes.io/projected/c02d25bb-c44c-498c-811d-05e60b417640-kube-api-access-qbcmr\") pod \"nova-metadata-0\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.656842 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.656895 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.671910 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-combined-ca-bundle\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.672443 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j48xd\" (UniqueName: \"kubernetes.io/projected/43d451de-7824-46ed-9709-d884d2df08e0-kube-api-access-j48xd\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " 
pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.672789 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-scripts\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.673048 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-config-data\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.683001 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-combined-ca-bundle\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.699894 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j48xd\" (UniqueName: \"kubernetes.io/projected/43d451de-7824-46ed-9709-d884d2df08e0-kube-api-access-j48xd\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.718407 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-config-data\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.719329 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.724668 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-scripts\") pod \"aodh-db-sync-6hxsh\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.752664 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.843602 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:01 crc kubenswrapper[4854]: I0103 06:07:01.920449 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:07:02 crc kubenswrapper[4854]: I0103 06:07:02.034874 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-b57b6"] Jan 03 06:07:02 crc kubenswrapper[4854]: I0103 06:07:02.035155 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" podUID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" containerName="dnsmasq-dns" containerID="cri-o://f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a" gracePeriod=10 Jan 03 06:07:02 crc kubenswrapper[4854]: I0103 06:07:02.177581 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d880689e-e8bd-4759-9d27-699b26516946" path="/var/lib/kubelet/pods/d880689e-e8bd-4759-9d27-699b26516946/volumes" Jan 03 06:07:02 crc kubenswrapper[4854]: I0103 06:07:02.306236 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.239:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 06:07:02 crc kubenswrapper[4854]: I0103 06:07:02.306603 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.239:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 06:07:02 crc kubenswrapper[4854]: I0103 06:07:02.469507 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 03 06:07:02 crc kubenswrapper[4854]: W0103 06:07:02.735595 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02d25bb_c44c_498c_811d_05e60b417640.slice/crio-314f6118fddabbbb9ad248d39dac5c580652fb217ee544250843bafd3c88371d WatchSource:0}: Error finding container 314f6118fddabbbb9ad248d39dac5c580652fb217ee544250843bafd3c88371d: Status 404 returned error can't find the container with id 314f6118fddabbbb9ad248d39dac5c580652fb217ee544250843bafd3c88371d Jan 03 06:07:02 crc kubenswrapper[4854]: I0103 06:07:02.777781 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:02 crc kubenswrapper[4854]: I0103 06:07:02.955955 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6hxsh"] Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.354511 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.391988 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02d25bb-c44c-498c-811d-05e60b417640","Type":"ContainerStarted","Data":"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e"} Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.392946 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02d25bb-c44c-498c-811d-05e60b417640","Type":"ContainerStarted","Data":"314f6118fddabbbb9ad248d39dac5c580652fb217ee544250843bafd3c88371d"} Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.412876 4854 generic.go:334] "Generic (PLEG): container finished" podID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" containerID="f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a" exitCode=0 Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.412942 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" event={"ID":"5cec124c-cb6f-4b93-a398-6c766bbc6c19","Type":"ContainerDied","Data":"f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a"} Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.412974 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" event={"ID":"5cec124c-cb6f-4b93-a398-6c766bbc6c19","Type":"ContainerDied","Data":"2b57228bbacbe5beefb4c2d8277742fa5b1f137600954bda7f674edcc7dd5ea7"} Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.412991 4854 scope.go:117] "RemoveContainer" containerID="f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.413152 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-b57b6" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.430242 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6hxsh" event={"ID":"43d451de-7824-46ed-9709-d884d2df08e0","Type":"ContainerStarted","Data":"cd4181c63b623c31f8d79f5ef872b4ccd452eeb4f0d3d61c969639b3f136bf04"} Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.544741 4854 scope.go:117] "RemoveContainer" containerID="9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.558670 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-svc\") pod \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.558824 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-nb\") pod \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.558982 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-sb\") pod \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.559109 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-swift-storage-0\") pod \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.559790 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dfbf\" (UniqueName: \"kubernetes.io/projected/5cec124c-cb6f-4b93-a398-6c766bbc6c19-kube-api-access-9dfbf\") pod \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.559825 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-config\") pod \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\" (UID: \"5cec124c-cb6f-4b93-a398-6c766bbc6c19\") " Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.569897 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cec124c-cb6f-4b93-a398-6c766bbc6c19-kube-api-access-9dfbf" (OuterVolumeSpecName: "kube-api-access-9dfbf") pod "5cec124c-cb6f-4b93-a398-6c766bbc6c19" (UID: "5cec124c-cb6f-4b93-a398-6c766bbc6c19"). InnerVolumeSpecName "kube-api-access-9dfbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.638745 4854 scope.go:117] "RemoveContainer" containerID="f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a" Jan 03 06:07:03 crc kubenswrapper[4854]: E0103 06:07:03.644178 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a\": container with ID starting with f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a not found: ID does not exist" containerID="f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.644221 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a"} err="failed to get container status \"f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a\": rpc error: code = NotFound desc = could not find container \"f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a\": container with ID starting with f988a279c8fb2ded71d0f512dbaa1059e65ef2a1dba26d3e8a2c68b1eef14c3a not found: ID does not exist" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.644250 4854 scope.go:117] "RemoveContainer" containerID="9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6" Jan 03 06:07:03 crc kubenswrapper[4854]: E0103 06:07:03.645236 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6\": container with ID starting with 9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6 not found: ID does not exist" containerID="9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.645278 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6"} err="failed to get container status \"9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6\": rpc error: code = NotFound desc = could not find container \"9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6\": container with ID starting with 9e5a1641fa55ab66b8a3223d27b0cbd7bcca652b583524d49f8e6a005a25b2b6 not found: ID does not exist" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.662767 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dfbf\" (UniqueName: \"kubernetes.io/projected/5cec124c-cb6f-4b93-a398-6c766bbc6c19-kube-api-access-9dfbf\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.852241 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-config" (OuterVolumeSpecName: "config") pod "5cec124c-cb6f-4b93-a398-6c766bbc6c19" (UID: "5cec124c-cb6f-4b93-a398-6c766bbc6c19"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.884830 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5cec124c-cb6f-4b93-a398-6c766bbc6c19" (UID: "5cec124c-cb6f-4b93-a398-6c766bbc6c19"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.911472 4854 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.911509 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.918742 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5cec124c-cb6f-4b93-a398-6c766bbc6c19" (UID: "5cec124c-cb6f-4b93-a398-6c766bbc6c19"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.957256 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5cec124c-cb6f-4b93-a398-6c766bbc6c19" (UID: "5cec124c-cb6f-4b93-a398-6c766bbc6c19"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:03 crc kubenswrapper[4854]: I0103 06:07:03.969234 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5cec124c-cb6f-4b93-a398-6c766bbc6c19" (UID: "5cec124c-cb6f-4b93-a398-6c766bbc6c19"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:04 crc kubenswrapper[4854]: I0103 06:07:04.014670 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:04 crc kubenswrapper[4854]: I0103 06:07:04.015108 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:04 crc kubenswrapper[4854]: I0103 06:07:04.015134 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cec124c-cb6f-4b93-a398-6c766bbc6c19-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:04 crc kubenswrapper[4854]: I0103 06:07:04.062871 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-b57b6"] Jan 03 06:07:04 crc kubenswrapper[4854]: I0103 06:07:04.074375 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-b57b6"] Jan 03 06:07:04 crc kubenswrapper[4854]: I0103 06:07:04.133774 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" path="/var/lib/kubelet/pods/5cec124c-cb6f-4b93-a398-6c766bbc6c19/volumes" Jan 03 06:07:04 crc kubenswrapper[4854]: I0103 06:07:04.469471 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02d25bb-c44c-498c-811d-05e60b417640","Type":"ContainerStarted","Data":"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f"} Jan 03 06:07:06 crc kubenswrapper[4854]: I0103 06:07:06.512114 4854 generic.go:334] "Generic (PLEG): container finished" podID="bc666e24-12a2-4bea-bded-bb83c896dc9d" containerID="75759a88f790e3f8574815d73410c788d3731d4c04f53edf6e75193f1d017620" exitCode=0 Jan 03 06:07:06 crc kubenswrapper[4854]: I0103 06:07:06.512240 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6q98m" event={"ID":"bc666e24-12a2-4bea-bded-bb83c896dc9d","Type":"ContainerDied","Data":"75759a88f790e3f8574815d73410c788d3731d4c04f53edf6e75193f1d017620"} Jan 03 06:07:06 crc kubenswrapper[4854]: I0103 06:07:06.546589 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.546568349 podStartE2EDuration="5.546568349s" podCreationTimestamp="2026-01-03 06:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:04.499576212 +0000 UTC m=+1602.826152784" watchObservedRunningTime="2026-01-03 06:07:06.546568349 +0000 UTC m=+1604.873144921" Jan 03 06:07:06 crc kubenswrapper[4854]: I0103 06:07:06.754218 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 06:07:06 crc kubenswrapper[4854]: I0103 06:07:06.754273 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 06:07:07 crc kubenswrapper[4854]: I0103 06:07:07.526476 4854 generic.go:334] "Generic (PLEG): container finished" podID="f93995e8-15fe-446c-b731-ade43a634b9b" containerID="af0831dd29cf03129c1714e21950c8a6ef74079e760c0d89508cfbe7f72d2a74" exitCode=0 Jan 03 06:07:07 crc kubenswrapper[4854]: I0103 06:07:07.526564 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-27kbq" event={"ID":"f93995e8-15fe-446c-b731-ade43a634b9b","Type":"ContainerDied","Data":"af0831dd29cf03129c1714e21950c8a6ef74079e760c0d89508cfbe7f72d2a74"} Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.226573 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.233716 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.376816 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bszd\" (UniqueName: \"kubernetes.io/projected/bc666e24-12a2-4bea-bded-bb83c896dc9d-kube-api-access-8bszd\") pod \"bc666e24-12a2-4bea-bded-bb83c896dc9d\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.376984 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-scripts\") pod \"bc666e24-12a2-4bea-bded-bb83c896dc9d\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.377016 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-combined-ca-bundle\") pod \"f93995e8-15fe-446c-b731-ade43a634b9b\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.377127 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9s2k\" (UniqueName: \"kubernetes.io/projected/f93995e8-15fe-446c-b731-ade43a634b9b-kube-api-access-k9s2k\") pod \"f93995e8-15fe-446c-b731-ade43a634b9b\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.377153 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-scripts\") pod \"f93995e8-15fe-446c-b731-ade43a634b9b\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.377332 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-config-data\") pod \"bc666e24-12a2-4bea-bded-bb83c896dc9d\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.377380 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-config-data\") pod \"f93995e8-15fe-446c-b731-ade43a634b9b\" (UID: \"f93995e8-15fe-446c-b731-ade43a634b9b\") " Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.377440 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-combined-ca-bundle\") pod \"bc666e24-12a2-4bea-bded-bb83c896dc9d\" (UID: \"bc666e24-12a2-4bea-bded-bb83c896dc9d\") " Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.397868 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bc666e24-12a2-4bea-bded-bb83c896dc9d-kube-api-access-8bszd" (OuterVolumeSpecName: "kube-api-access-8bszd") pod "bc666e24-12a2-4bea-bded-bb83c896dc9d" (UID: "bc666e24-12a2-4bea-bded-bb83c896dc9d"). InnerVolumeSpecName "kube-api-access-8bszd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.398720 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-scripts" (OuterVolumeSpecName: "scripts") pod "f93995e8-15fe-446c-b731-ade43a634b9b" (UID: "f93995e8-15fe-446c-b731-ade43a634b9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.398756 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-scripts" (OuterVolumeSpecName: "scripts") pod "bc666e24-12a2-4bea-bded-bb83c896dc9d" (UID: "bc666e24-12a2-4bea-bded-bb83c896dc9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.405276 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93995e8-15fe-446c-b731-ade43a634b9b-kube-api-access-k9s2k" (OuterVolumeSpecName: "kube-api-access-k9s2k") pod "f93995e8-15fe-446c-b731-ade43a634b9b" (UID: "f93995e8-15fe-446c-b731-ade43a634b9b"). InnerVolumeSpecName "kube-api-access-k9s2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.420707 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-config-data" (OuterVolumeSpecName: "config-data") pod "f93995e8-15fe-446c-b731-ade43a634b9b" (UID: "f93995e8-15fe-446c-b731-ade43a634b9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.432289 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc666e24-12a2-4bea-bded-bb83c896dc9d" (UID: "bc666e24-12a2-4bea-bded-bb83c896dc9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.433094 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f93995e8-15fe-446c-b731-ade43a634b9b" (UID: "f93995e8-15fe-446c-b731-ade43a634b9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.443436 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-config-data" (OuterVolumeSpecName: "config-data") pod "bc666e24-12a2-4bea-bded-bb83c896dc9d" (UID: "bc666e24-12a2-4bea-bded-bb83c896dc9d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.480418 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.480454 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.480466 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bszd\" (UniqueName: \"kubernetes.io/projected/bc666e24-12a2-4bea-bded-bb83c896dc9d-kube-api-access-8bszd\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.480477 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.480487 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.480495 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93995e8-15fe-446c-b731-ade43a634b9b-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.480503 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9s2k\" (UniqueName: \"kubernetes.io/projected/f93995e8-15fe-446c-b731-ade43a634b9b-kube-api-access-k9s2k\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.480510 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc666e24-12a2-4bea-bded-bb83c896dc9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.548435 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6hxsh" event={"ID":"43d451de-7824-46ed-9709-d884d2df08e0","Type":"ContainerStarted","Data":"f27a53f5524cc3c7b67b51715250a75176ef26bb2738c5386a7e095b62bd085c"} Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.550603 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6q98m" event={"ID":"bc666e24-12a2-4bea-bded-bb83c896dc9d","Type":"ContainerDied","Data":"d233bf72c3a05815089057cf9b2a97a69026d680b50173efd00cec03f50965ca"} Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.550664 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d233bf72c3a05815089057cf9b2a97a69026d680b50173efd00cec03f50965ca" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.550638 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6q98m" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.552593 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-27kbq" event={"ID":"f93995e8-15fe-446c-b731-ade43a634b9b","Type":"ContainerDied","Data":"ee7ec334159266f1d13884c423919b59b92ad8b5884016c359fe4bc86950f63f"} Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.552626 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7ec334159266f1d13884c423919b59b92ad8b5884016c359fe4bc86950f63f" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.552681 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-27kbq" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.572534 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-6hxsh" podStartSLOduration=2.535980206 podStartE2EDuration="8.572518194s" podCreationTimestamp="2026-01-03 06:07:01 +0000 UTC" firstStartedPulling="2026-01-03 06:07:03.034728009 +0000 UTC m=+1601.361304581" lastFinishedPulling="2026-01-03 06:07:09.071265987 +0000 UTC m=+1607.397842569" observedRunningTime="2026-01-03 06:07:09.566176826 +0000 UTC m=+1607.892753388" watchObservedRunningTime="2026-01-03 06:07:09.572518194 +0000 UTC m=+1607.899094766" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.646525 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 03 06:07:09 crc kubenswrapper[4854]: E0103 06:07:09.647406 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" containerName="init" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.647494 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" containerName="init" Jan 03 06:07:09 crc kubenswrapper[4854]: E0103 06:07:09.647578 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93995e8-15fe-446c-b731-ade43a634b9b" containerName="nova-cell1-conductor-db-sync" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.647640 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93995e8-15fe-446c-b731-ade43a634b9b" containerName="nova-cell1-conductor-db-sync" Jan 03 06:07:09 crc kubenswrapper[4854]: E0103 06:07:09.647711 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" containerName="dnsmasq-dns" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.647805 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" containerName="dnsmasq-dns" Jan 03 06:07:09 crc kubenswrapper[4854]: E0103 06:07:09.647872 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc666e24-12a2-4bea-bded-bb83c896dc9d" containerName="nova-manage" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.647939 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc666e24-12a2-4bea-bded-bb83c896dc9d" containerName="nova-manage" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.648308 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93995e8-15fe-446c-b731-ade43a634b9b" containerName="nova-cell1-conductor-db-sync" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.648406 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc666e24-12a2-4bea-bded-bb83c896dc9d" 
containerName="nova-manage" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.648477 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cec124c-cb6f-4b93-a398-6c766bbc6c19" containerName="dnsmasq-dns" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.649553 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.652030 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.660644 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.688888 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3819ce23-1307-4429-985f-019216905070-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.688939 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3819ce23-1307-4429-985f-019216905070-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.689145 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ms4c\" (UniqueName: \"kubernetes.io/projected/3819ce23-1307-4429-985f-019216905070-kube-api-access-5ms4c\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.791127 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3819ce23-1307-4429-985f-019216905070-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.791180 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3819ce23-1307-4429-985f-019216905070-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.791265 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ms4c\" (UniqueName: \"kubernetes.io/projected/3819ce23-1307-4429-985f-019216905070-kube-api-access-5ms4c\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.796158 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3819ce23-1307-4429-985f-019216905070-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.797763 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3819ce23-1307-4429-985f-019216905070-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.818006 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ms4c\" (UniqueName: \"kubernetes.io/projected/3819ce23-1307-4429-985f-019216905070-kube-api-access-5ms4c\") pod \"nova-cell1-conductor-0\" (UID: \"3819ce23-1307-4429-985f-019216905070\") " pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:09 crc kubenswrapper[4854]: I0103 06:07:09.991554 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.448635 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.449246 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-log" containerID="cri-o://f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34" gracePeriod=30 Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.449373 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-api" containerID="cri-o://e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05" gracePeriod=30 Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.476767 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.477101 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a292205e-b4eb-4f28-a9a8-9fbceaea3f60" containerName="nova-scheduler-scheduler" containerID="cri-o://87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a" gracePeriod=30 Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.502494 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.503101 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c02d25bb-c44c-498c-811d-05e60b417640" containerName="nova-metadata-log" containerID="cri-o://2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e" gracePeriod=30 Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.503565 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c02d25bb-c44c-498c-811d-05e60b417640" containerName="nova-metadata-metadata" containerID="cri-o://579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f" gracePeriod=30 Jan 03 06:07:10 crc kubenswrapper[4854]: W0103 06:07:10.554549 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3819ce23_1307_4429_985f_019216905070.slice/crio-324e5cbfd271b92d5b2cecb53cb5be3a0d3a23d6426911c1fb2b781edcfd6934 WatchSource:0}: Error finding container 324e5cbfd271b92d5b2cecb53cb5be3a0d3a23d6426911c1fb2b781edcfd6934: Status 404 returned error can't find the container with id 
324e5cbfd271b92d5b2cecb53cb5be3a0d3a23d6426911c1fb2b781edcfd6934 Jan 03 06:07:10 crc kubenswrapper[4854]: I0103 06:07:10.555067 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.068666 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.140097 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbcmr\" (UniqueName: \"kubernetes.io/projected/c02d25bb-c44c-498c-811d-05e60b417640-kube-api-access-qbcmr\") pod \"c02d25bb-c44c-498c-811d-05e60b417640\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.140403 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02d25bb-c44c-498c-811d-05e60b417640-logs\") pod \"c02d25bb-c44c-498c-811d-05e60b417640\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.140537 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-config-data\") pod \"c02d25bb-c44c-498c-811d-05e60b417640\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.140666 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-nova-metadata-tls-certs\") pod \"c02d25bb-c44c-498c-811d-05e60b417640\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.140831 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-combined-ca-bundle\") pod \"c02d25bb-c44c-498c-811d-05e60b417640\" (UID: \"c02d25bb-c44c-498c-811d-05e60b417640\") " Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.140814 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c02d25bb-c44c-498c-811d-05e60b417640-logs" (OuterVolumeSpecName: "logs") pod "c02d25bb-c44c-498c-811d-05e60b417640" (UID: "c02d25bb-c44c-498c-811d-05e60b417640"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.141878 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02d25bb-c44c-498c-811d-05e60b417640-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.144842 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02d25bb-c44c-498c-811d-05e60b417640-kube-api-access-qbcmr" (OuterVolumeSpecName: "kube-api-access-qbcmr") pod "c02d25bb-c44c-498c-811d-05e60b417640" (UID: "c02d25bb-c44c-498c-811d-05e60b417640"). InnerVolumeSpecName "kube-api-access-qbcmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.189143 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-config-data" (OuterVolumeSpecName: "config-data") pod "c02d25bb-c44c-498c-811d-05e60b417640" (UID: "c02d25bb-c44c-498c-811d-05e60b417640"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.205482 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c02d25bb-c44c-498c-811d-05e60b417640" (UID: "c02d25bb-c44c-498c-811d-05e60b417640"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.238454 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c02d25bb-c44c-498c-811d-05e60b417640" (UID: "c02d25bb-c44c-498c-811d-05e60b417640"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.244359 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.244597 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbcmr\" (UniqueName: \"kubernetes.io/projected/c02d25bb-c44c-498c-811d-05e60b417640-kube-api-access-qbcmr\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.244609 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.244617 4854 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02d25bb-c44c-498c-811d-05e60b417640-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.576646 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3819ce23-1307-4429-985f-019216905070","Type":"ContainerStarted","Data":"c8b33764ca1e32de9e81731b43e10517d223e13789b1b66d2559b8eb2c362632"} Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.576698 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3819ce23-1307-4429-985f-019216905070","Type":"ContainerStarted","Data":"324e5cbfd271b92d5b2cecb53cb5be3a0d3a23d6426911c1fb2b781edcfd6934"} Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.576718 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.580007 4854 generic.go:334] "Generic (PLEG): container finished" podID="c02d25bb-c44c-498c-811d-05e60b417640" containerID="579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f" exitCode=0 Jan 03 06:07:11 
crc kubenswrapper[4854]: I0103 06:07:11.580042 4854 generic.go:334] "Generic (PLEG): container finished" podID="c02d25bb-c44c-498c-811d-05e60b417640" containerID="2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e" exitCode=143 Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.580055 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02d25bb-c44c-498c-811d-05e60b417640","Type":"ContainerDied","Data":"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f"} Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.580074 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.580121 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02d25bb-c44c-498c-811d-05e60b417640","Type":"ContainerDied","Data":"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e"} Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.580140 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02d25bb-c44c-498c-811d-05e60b417640","Type":"ContainerDied","Data":"314f6118fddabbbb9ad248d39dac5c580652fb217ee544250843bafd3c88371d"} Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.580167 4854 scope.go:117] "RemoveContainer" containerID="579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.591583 4854 generic.go:334] "Generic (PLEG): container finished" podID="21e34b84-a209-4662-a8df-3e7aff354daa" containerID="f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34" exitCode=143 Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.591926 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21e34b84-a209-4662-a8df-3e7aff354daa","Type":"ContainerDied","Data":"f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34"} Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.613907 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.613886462 podStartE2EDuration="2.613886462s" podCreationTimestamp="2026-01-03 06:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:11.600469418 +0000 UTC m=+1609.927045980" watchObservedRunningTime="2026-01-03 06:07:11.613886462 +0000 UTC m=+1609.940463304" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.643893 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:11 crc kubenswrapper[4854]: E0103 06:07:11.663743 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a is running failed: container process not found" containerID="87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.665242 4854 scope.go:117] "RemoveContainer" containerID="2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e" Jan 03 06:07:11 crc kubenswrapper[4854]: E0103 06:07:11.665451 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a is running failed: container process not found" containerID="87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.669140 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:11 crc kubenswrapper[4854]: E0103 06:07:11.669480 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a is running failed: container process not found" containerID="87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 06:07:11 crc kubenswrapper[4854]: E0103 06:07:11.669560 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a292205e-b4eb-4f28-a9a8-9fbceaea3f60" containerName="nova-scheduler-scheduler" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.698683 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:11 crc kubenswrapper[4854]: E0103 06:07:11.699633 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02d25bb-c44c-498c-811d-05e60b417640" containerName="nova-metadata-log" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.699655 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02d25bb-c44c-498c-811d-05e60b417640" containerName="nova-metadata-log" Jan 03 06:07:11 crc kubenswrapper[4854]: E0103 06:07:11.699708 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02d25bb-c44c-498c-811d-05e60b417640" containerName="nova-metadata-metadata" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.699716 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02d25bb-c44c-498c-811d-05e60b417640" containerName="nova-metadata-metadata" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.700101 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02d25bb-c44c-498c-811d-05e60b417640" containerName="nova-metadata-log" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.700140 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02d25bb-c44c-498c-811d-05e60b417640" containerName="nova-metadata-metadata" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.703061 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.705637 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.705973 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.718616 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.767595 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6bhr\" (UniqueName: \"kubernetes.io/projected/261c8dea-757c-4e06-9bd5-a39fdb96f34e-kube-api-access-t6bhr\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.767716 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-config-data\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.767760 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.767856 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261c8dea-757c-4e06-9bd5-a39fdb96f34e-logs\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.767937 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.792196 4854 scope.go:117] "RemoveContainer" containerID="579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f" Jan 03 06:07:11 crc kubenswrapper[4854]: E0103 06:07:11.792806 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f\": container with ID starting with 579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f not found: ID does not exist" containerID="579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.792839 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f"} err="failed to get container status \"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f\": rpc error: code = NotFound desc = could not find container 
\"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f\": container with ID starting with 579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f not found: ID does not exist" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.792862 4854 scope.go:117] "RemoveContainer" containerID="2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e" Jan 03 06:07:11 crc kubenswrapper[4854]: E0103 06:07:11.793189 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e\": container with ID starting with 2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e not found: ID does not exist" containerID="2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.793213 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e"} err="failed to get container status \"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e\": rpc error: code = NotFound desc = could not find container \"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e\": container with ID starting with 2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e not found: ID does not exist" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.793229 4854 scope.go:117] "RemoveContainer" containerID="579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.793519 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f"} err="failed to get container status \"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f\": rpc error: code = NotFound desc = could not find container \"579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f\": container with ID starting with 579674215d90bdef9e4c5ec6e21c19d182adcfe481ca1f26f43fcc0a469ee82f not found: ID does not exist" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.793562 4854 scope.go:117] "RemoveContainer" containerID="2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.793857 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e"} err="failed to get container status \"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e\": rpc error: code = NotFound desc = could not find container \"2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e\": container with ID starting with 2144215f1ff17b0540f1e1e22f8c1590a1af5b221215b5a9733a8571f8143a9e not found: ID does not exist" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.870051 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261c8dea-757c-4e06-9bd5-a39fdb96f34e-logs\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.870438 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.870505 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6bhr\" (UniqueName: \"kubernetes.io/projected/261c8dea-757c-4e06-9bd5-a39fdb96f34e-kube-api-access-t6bhr\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.870580 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-config-data\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.870621 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.871068 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261c8dea-757c-4e06-9bd5-a39fdb96f34e-logs\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.877782 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.878324 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.879584 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-config-data\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:11 crc kubenswrapper[4854]: I0103 06:07:11.894851 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6bhr\" (UniqueName: \"kubernetes.io/projected/261c8dea-757c-4e06-9bd5-a39fdb96f34e-kube-api-access-t6bhr\") pod \"nova-metadata-0\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") " pod="openstack/nova-metadata-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.079943 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.133213 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02d25bb-c44c-498c-811d-05e60b417640" path="/var/lib/kubelet/pods/c02d25bb-c44c-498c-811d-05e60b417640/volumes" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.271996 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.403978 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-config-data\") pod \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.404039 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-combined-ca-bundle\") pod \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.404169 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnsv5\" (UniqueName: \"kubernetes.io/projected/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-kube-api-access-vnsv5\") pod \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\" (UID: \"a292205e-b4eb-4f28-a9a8-9fbceaea3f60\") " Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.412334 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-kube-api-access-vnsv5" (OuterVolumeSpecName: "kube-api-access-vnsv5") pod "a292205e-b4eb-4f28-a9a8-9fbceaea3f60" (UID: "a292205e-b4eb-4f28-a9a8-9fbceaea3f60"). InnerVolumeSpecName "kube-api-access-vnsv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.449010 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-config-data" (OuterVolumeSpecName: "config-data") pod "a292205e-b4eb-4f28-a9a8-9fbceaea3f60" (UID: "a292205e-b4eb-4f28-a9a8-9fbceaea3f60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.458229 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a292205e-b4eb-4f28-a9a8-9fbceaea3f60" (UID: "a292205e-b4eb-4f28-a9a8-9fbceaea3f60"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.508301 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.508485 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.508545 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnsv5\" (UniqueName: \"kubernetes.io/projected/a292205e-b4eb-4f28-a9a8-9fbceaea3f60-kube-api-access-vnsv5\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.572901 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.605825 4854 generic.go:334] "Generic (PLEG): container finished" podID="43d451de-7824-46ed-9709-d884d2df08e0" containerID="f27a53f5524cc3c7b67b51715250a75176ef26bb2738c5386a7e095b62bd085c" exitCode=0 Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.605883 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6hxsh" event={"ID":"43d451de-7824-46ed-9709-d884d2df08e0","Type":"ContainerDied","Data":"f27a53f5524cc3c7b67b51715250a75176ef26bb2738c5386a7e095b62bd085c"} Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.609959 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261c8dea-757c-4e06-9bd5-a39fdb96f34e","Type":"ContainerStarted","Data":"19a41f2c2b4aeb03ad8e5e02bce7b35d2cd782e31a11ec03eab4f6befc656f61"} Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.611985 4854 generic.go:334] "Generic (PLEG): container finished" podID="a292205e-b4eb-4f28-a9a8-9fbceaea3f60" containerID="87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a" exitCode=0 Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.612304 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a292205e-b4eb-4f28-a9a8-9fbceaea3f60","Type":"ContainerDied","Data":"87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a"} Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.612350 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a292205e-b4eb-4f28-a9a8-9fbceaea3f60","Type":"ContainerDied","Data":"1a837ecb8d541b9c7c1380a335194b7c910484c8d0884909015eaff8e1090ad0"} Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.612366 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.612374 4854 scope.go:117] "RemoveContainer" containerID="87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.648885 4854 scope.go:117] "RemoveContainer" containerID="87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a" Jan 03 06:07:12 crc kubenswrapper[4854]: E0103 06:07:12.650894 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a\": container with ID starting with 87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a not found: ID does not exist" containerID="87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.650941 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a"} err="failed to get container status \"87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a\": rpc error: code = NotFound desc = could not find container \"87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a\": container with ID starting with 87adf904de741a7fbcaee828ec1aceb49648762de69b9abb00cf1e053a52268a not found: ID does not exist" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.678349 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.696531 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.714888 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:07:12 crc kubenswrapper[4854]: E0103 06:07:12.715808 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a292205e-b4eb-4f28-a9a8-9fbceaea3f60" containerName="nova-scheduler-scheduler" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.715905 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a292205e-b4eb-4f28-a9a8-9fbceaea3f60" containerName="nova-scheduler-scheduler" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.716391 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a292205e-b4eb-4f28-a9a8-9fbceaea3f60" containerName="nova-scheduler-scheduler" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.721450 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.724346 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.731888 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.815596 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-config-data\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.815662 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.815867 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpcb\" (UniqueName: \"kubernetes.io/projected/b6912a97-3357-44a4-b06f-284d4ec6c357-kube-api-access-shpcb\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.917430 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpcb\" (UniqueName: \"kubernetes.io/projected/b6912a97-3357-44a4-b06f-284d4ec6c357-kube-api-access-shpcb\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.917610 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-config-data\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.917707 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.921252 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-config-data\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.921524 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:12 crc kubenswrapper[4854]: I0103 06:07:12.940871 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpcb\" (UniqueName: 
\"kubernetes.io/projected/b6912a97-3357-44a4-b06f-284d4ec6c357-kube-api-access-shpcb\") pod \"nova-scheduler-0\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:13 crc kubenswrapper[4854]: I0103 06:07:13.044947 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 06:07:13 crc kubenswrapper[4854]: I0103 06:07:13.556923 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:07:13 crc kubenswrapper[4854]: I0103 06:07:13.624121 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b6912a97-3357-44a4-b06f-284d4ec6c357","Type":"ContainerStarted","Data":"0c3f2acec7326b78719a0cec7ab785a8dffa2b53329970846f9ea254fc24b287"} Jan 03 06:07:13 crc kubenswrapper[4854]: I0103 06:07:13.634703 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261c8dea-757c-4e06-9bd5-a39fdb96f34e","Type":"ContainerStarted","Data":"fb7017417a322e9530ca80496fb14a84bbcde2e3df4b43025cf7e1315f818941"} Jan 03 06:07:13 crc kubenswrapper[4854]: I0103 06:07:13.634743 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261c8dea-757c-4e06-9bd5-a39fdb96f34e","Type":"ContainerStarted","Data":"3c3deddc92d214919071046f9984daf5656b1a6e4ff0ce2e45db1651ac1ac96d"} Jan 03 06:07:13 crc kubenswrapper[4854]: I0103 06:07:13.694828 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.694808006 podStartE2EDuration="2.694808006s" podCreationTimestamp="2026-01-03 06:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:13.688328484 +0000 UTC m=+1612.014905056" watchObservedRunningTime="2026-01-03 06:07:13.694808006 +0000 UTC m=+1612.021384578" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.152618 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a292205e-b4eb-4f28-a9a8-9fbceaea3f60" path="/var/lib/kubelet/pods/a292205e-b4eb-4f28-a9a8-9fbceaea3f60/volumes" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.294502 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.372910 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j48xd\" (UniqueName: \"kubernetes.io/projected/43d451de-7824-46ed-9709-d884d2df08e0-kube-api-access-j48xd\") pod \"43d451de-7824-46ed-9709-d884d2df08e0\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.373038 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-combined-ca-bundle\") pod \"43d451de-7824-46ed-9709-d884d2df08e0\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.373160 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-config-data\") pod \"43d451de-7824-46ed-9709-d884d2df08e0\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.373245 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-scripts\") pod \"43d451de-7824-46ed-9709-d884d2df08e0\" (UID: \"43d451de-7824-46ed-9709-d884d2df08e0\") " Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.378503 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-scripts" (OuterVolumeSpecName: "scripts") pod "43d451de-7824-46ed-9709-d884d2df08e0" (UID: "43d451de-7824-46ed-9709-d884d2df08e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.379282 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d451de-7824-46ed-9709-d884d2df08e0-kube-api-access-j48xd" (OuterVolumeSpecName: "kube-api-access-j48xd") pod "43d451de-7824-46ed-9709-d884d2df08e0" (UID: "43d451de-7824-46ed-9709-d884d2df08e0"). InnerVolumeSpecName "kube-api-access-j48xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.410996 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-config-data" (OuterVolumeSpecName: "config-data") pod "43d451de-7824-46ed-9709-d884d2df08e0" (UID: "43d451de-7824-46ed-9709-d884d2df08e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.411738 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43d451de-7824-46ed-9709-d884d2df08e0" (UID: "43d451de-7824-46ed-9709-d884d2df08e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.475657 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j48xd\" (UniqueName: \"kubernetes.io/projected/43d451de-7824-46ed-9709-d884d2df08e0-kube-api-access-j48xd\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.475903 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.475914 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.475922 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43d451de-7824-46ed-9709-d884d2df08e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.479420 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.577598 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-combined-ca-bundle\") pod \"21e34b84-a209-4662-a8df-3e7aff354daa\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.577671 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spphc\" (UniqueName: \"kubernetes.io/projected/21e34b84-a209-4662-a8df-3e7aff354daa-kube-api-access-spphc\") pod \"21e34b84-a209-4662-a8df-3e7aff354daa\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.577957 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e34b84-a209-4662-a8df-3e7aff354daa-logs\") pod \"21e34b84-a209-4662-a8df-3e7aff354daa\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.578115 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-config-data\") pod \"21e34b84-a209-4662-a8df-3e7aff354daa\" (UID: \"21e34b84-a209-4662-a8df-3e7aff354daa\") " Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.578581 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21e34b84-a209-4662-a8df-3e7aff354daa-logs" (OuterVolumeSpecName: "logs") pod "21e34b84-a209-4662-a8df-3e7aff354daa" (UID: "21e34b84-a209-4662-a8df-3e7aff354daa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.579477 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e34b84-a209-4662-a8df-3e7aff354daa-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.580978 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e34b84-a209-4662-a8df-3e7aff354daa-kube-api-access-spphc" (OuterVolumeSpecName: "kube-api-access-spphc") pod "21e34b84-a209-4662-a8df-3e7aff354daa" (UID: "21e34b84-a209-4662-a8df-3e7aff354daa"). InnerVolumeSpecName "kube-api-access-spphc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.608543 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21e34b84-a209-4662-a8df-3e7aff354daa" (UID: "21e34b84-a209-4662-a8df-3e7aff354daa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.618227 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-config-data" (OuterVolumeSpecName: "config-data") pod "21e34b84-a209-4662-a8df-3e7aff354daa" (UID: "21e34b84-a209-4662-a8df-3e7aff354daa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.652073 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b6912a97-3357-44a4-b06f-284d4ec6c357","Type":"ContainerStarted","Data":"74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca"} Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.655600 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6hxsh" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.655656 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6hxsh" event={"ID":"43d451de-7824-46ed-9709-d884d2df08e0","Type":"ContainerDied","Data":"cd4181c63b623c31f8d79f5ef872b4ccd452eeb4f0d3d61c969639b3f136bf04"} Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.655717 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4181c63b623c31f8d79f5ef872b4ccd452eeb4f0d3d61c969639b3f136bf04" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.658762 4854 generic.go:334] "Generic (PLEG): container finished" podID="21e34b84-a209-4662-a8df-3e7aff354daa" containerID="e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05" exitCode=0 Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.659630 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.663746 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21e34b84-a209-4662-a8df-3e7aff354daa","Type":"ContainerDied","Data":"e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05"} Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.663805 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21e34b84-a209-4662-a8df-3e7aff354daa","Type":"ContainerDied","Data":"c6ad0d53b5abfc1e3d8378bea5fd494b9239d5572fdc4a3b26bdb85ec765b4dc"} Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.663825 4854 scope.go:117] "RemoveContainer" containerID="e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.672107 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.672061232 podStartE2EDuration="2.672061232s" podCreationTimestamp="2026-01-03 06:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:14.669528899 +0000 UTC m=+1612.996105481" watchObservedRunningTime="2026-01-03 06:07:14.672061232 +0000 UTC m=+1612.998637824" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.681133 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.681162 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e34b84-a209-4662-a8df-3e7aff354daa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.681173 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spphc\" (UniqueName: \"kubernetes.io/projected/21e34b84-a209-4662-a8df-3e7aff354daa-kube-api-access-spphc\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.710691 4854 scope.go:117] "RemoveContainer" containerID="f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.727848 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.740418 4854 scope.go:117] "RemoveContainer" containerID="e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05" Jan 03 06:07:14 crc kubenswrapper[4854]: E0103 06:07:14.742963 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05\": container with ID starting with e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05 not found: ID does not exist" containerID="e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.743022 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05"} err="failed to get container status \"e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05\": rpc error: code = NotFound desc = could not find 
container \"e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05\": container with ID starting with e08e40ad687e3a0569723d0f0de549270e85f504be05f48a295afb2249c6ac05 not found: ID does not exist" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.743058 4854 scope.go:117] "RemoveContainer" containerID="f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34" Jan 03 06:07:14 crc kubenswrapper[4854]: E0103 06:07:14.746055 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34\": container with ID starting with f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34 not found: ID does not exist" containerID="f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.746109 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34"} err="failed to get container status \"f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34\": rpc error: code = NotFound desc = could not find container \"f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34\": container with ID starting with f5e80a9ccb7f7973646c36c7fa736e64db3ed114174c3e54a4d050b491eccf34 not found: ID does not exist" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.749907 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.773593 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:14 crc kubenswrapper[4854]: E0103 06:07:14.774241 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-log" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.774261 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-log" Jan 03 06:07:14 crc kubenswrapper[4854]: E0103 06:07:14.774276 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-api" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.774282 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-api" Jan 03 06:07:14 crc kubenswrapper[4854]: E0103 06:07:14.774305 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d451de-7824-46ed-9709-d884d2df08e0" containerName="aodh-db-sync" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.774311 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d451de-7824-46ed-9709-d884d2df08e0" containerName="aodh-db-sync" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.774559 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d451de-7824-46ed-9709-d884d2df08e0" containerName="aodh-db-sync" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.774585 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-api" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.774601 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" containerName="nova-api-log" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 
06:07:14.783517 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.786782 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.787266 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.885812 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrhcj\" (UniqueName: \"kubernetes.io/projected/acc395e2-fa94-4736-8167-960fa4c2779b-kube-api-access-hrhcj\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.885916 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc395e2-fa94-4736-8167-960fa4c2779b-logs\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.885990 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-config-data\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.886106 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.990361 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc395e2-fa94-4736-8167-960fa4c2779b-logs\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.990474 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-config-data\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.990530 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.990664 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrhcj\" (UniqueName: \"kubernetes.io/projected/acc395e2-fa94-4736-8167-960fa4c2779b-kube-api-access-hrhcj\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.991014 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc395e2-fa94-4736-8167-960fa4c2779b-logs\") pod 
\"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.994211 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-config-data\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:14 crc kubenswrapper[4854]: I0103 06:07:14.994446 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:15 crc kubenswrapper[4854]: I0103 06:07:15.009745 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrhcj\" (UniqueName: \"kubernetes.io/projected/acc395e2-fa94-4736-8167-960fa4c2779b-kube-api-access-hrhcj\") pod \"nova-api-0\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " pod="openstack/nova-api-0" Jan 03 06:07:15 crc kubenswrapper[4854]: I0103 06:07:15.109355 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:15 crc kubenswrapper[4854]: I0103 06:07:15.609307 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:15 crc kubenswrapper[4854]: I0103 06:07:15.672123 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc395e2-fa94-4736-8167-960fa4c2779b","Type":"ContainerStarted","Data":"b0479bea2b9b539411b3ad41c363bd9144919d649ddaa482cfc520c4bd101f48"} Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.132932 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e34b84-a209-4662-a8df-3e7aff354daa" path="/var/lib/kubelet/pods/21e34b84-a209-4662-a8df-3e7aff354daa/volumes" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.574996 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.581547 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.584439 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.587217 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bkf2n" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.587605 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.594756 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.631059 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.631325 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-scripts\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.631374 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-config-data\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.631506 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwf7f\" (UniqueName: \"kubernetes.io/projected/7eb108e4-cabe-4eca-afb4-4104b147b759-kube-api-access-cwf7f\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.688610 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc395e2-fa94-4736-8167-960fa4c2779b","Type":"ContainerStarted","Data":"429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf"} Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.688653 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc395e2-fa94-4736-8167-960fa4c2779b","Type":"ContainerStarted","Data":"78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43"} Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.720760 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.720737781 podStartE2EDuration="2.720737781s" podCreationTimestamp="2026-01-03 06:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:16.712629148 +0000 UTC m=+1615.039205720" watchObservedRunningTime="2026-01-03 06:07:16.720737781 +0000 UTC m=+1615.047314363" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.733269 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-combined-ca-bundle\") pod 
\"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.733452 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-scripts\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.733495 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-config-data\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.733569 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwf7f\" (UniqueName: \"kubernetes.io/projected/7eb108e4-cabe-4eca-afb4-4104b147b759-kube-api-access-cwf7f\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.738838 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-scripts\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.741820 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-config-data\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.754943 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.756664 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwf7f\" (UniqueName: \"kubernetes.io/projected/7eb108e4-cabe-4eca-afb4-4104b147b759-kube-api-access-cwf7f\") pod \"aodh-0\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " pod="openstack/aodh-0" Jan 03 06:07:16 crc kubenswrapper[4854]: I0103 06:07:16.907141 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 03 06:07:17 crc kubenswrapper[4854]: I0103 06:07:17.081031 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 06:07:17 crc kubenswrapper[4854]: I0103 06:07:17.081120 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 06:07:17 crc kubenswrapper[4854]: W0103 06:07:17.531419 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eb108e4_cabe_4eca_afb4_4104b147b759.slice/crio-38b30f234c6927ac913a173a496b85168711da8e88c0d3d3981155cda1803182 WatchSource:0}: Error finding container 38b30f234c6927ac913a173a496b85168711da8e88c0d3d3981155cda1803182: Status 404 returned error can't find the container with id 38b30f234c6927ac913a173a496b85168711da8e88c0d3d3981155cda1803182 Jan 03 06:07:17 crc kubenswrapper[4854]: I0103 06:07:17.536429 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 03 06:07:17 crc kubenswrapper[4854]: I0103 06:07:17.700504 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerStarted","Data":"38b30f234c6927ac913a173a496b85168711da8e88c0d3d3981155cda1803182"} Jan 03 06:07:18 crc kubenswrapper[4854]: I0103 06:07:18.045195 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 03 06:07:18 crc kubenswrapper[4854]: I0103 06:07:18.719032 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerStarted","Data":"0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d"} Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.003598 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.004204 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="ceilometer-central-agent" containerID="cri-o://1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4" gracePeriod=30 Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.004248 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="proxy-httpd" containerID="cri-o://97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1" gracePeriod=30 Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.004361 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="sg-core" containerID="cri-o://8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade" gracePeriod=30 Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.004308 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="ceilometer-notification-agent" containerID="cri-o://11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18" gracePeriod=30 Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.013338 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" 
containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": EOF" Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.035490 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.171351 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": dial tcp 10.217.0.237:3000: connect: connection refused" Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.364435 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.768186 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerStarted","Data":"8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8"} Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.771512 4854 generic.go:334] "Generic (PLEG): container finished" podID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerID="97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1" exitCode=0 Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.771537 4854 generic.go:334] "Generic (PLEG): container finished" podID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerID="8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade" exitCode=2 Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.771550 4854 generic.go:334] "Generic (PLEG): container finished" podID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerID="1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4" exitCode=0 Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.771571 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerDied","Data":"97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1"} Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.771590 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerDied","Data":"8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade"} Jan 03 06:07:20 crc kubenswrapper[4854]: I0103 06:07:20.771604 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerDied","Data":"1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4"} Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.498144 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.768038 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-sg-core-conf-yaml\") pod \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.768473 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-run-httpd\") pod \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.768615 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-scripts\") pod \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.768735 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb6vh\" (UniqueName: \"kubernetes.io/projected/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-kube-api-access-wb6vh\") pod \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.768825 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-log-httpd\") pod \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.768882 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec4f77c3-e679-4e7f-92a4-dd888ba6522b" (UID: "ec4f77c3-e679-4e7f-92a4-dd888ba6522b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.768889 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-combined-ca-bundle\") pod \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.769000 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-config-data\") pod \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\" (UID: \"ec4f77c3-e679-4e7f-92a4-dd888ba6522b\") " Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.770206 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.770794 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec4f77c3-e679-4e7f-92a4-dd888ba6522b" (UID: "ec4f77c3-e679-4e7f-92a4-dd888ba6522b"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.787175 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-scripts" (OuterVolumeSpecName: "scripts") pod "ec4f77c3-e679-4e7f-92a4-dd888ba6522b" (UID: "ec4f77c3-e679-4e7f-92a4-dd888ba6522b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.793810 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-kube-api-access-wb6vh" (OuterVolumeSpecName: "kube-api-access-wb6vh") pod "ec4f77c3-e679-4e7f-92a4-dd888ba6522b" (UID: "ec4f77c3-e679-4e7f-92a4-dd888ba6522b"). InnerVolumeSpecName "kube-api-access-wb6vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.806119 4854 generic.go:334] "Generic (PLEG): container finished" podID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerID="11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18" exitCode=0 Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.806166 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerDied","Data":"11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18"} Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.806197 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec4f77c3-e679-4e7f-92a4-dd888ba6522b","Type":"ContainerDied","Data":"3b08cab72b3a44826e956f11f46978672b20914aae3fd119009ec8846f8cb322"} Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.806213 4854 scope.go:117] "RemoveContainer" containerID="97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.806555 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.854401 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec4f77c3-e679-4e7f-92a4-dd888ba6522b" (UID: "ec4f77c3-e679-4e7f-92a4-dd888ba6522b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.870279 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec4f77c3-e679-4e7f-92a4-dd888ba6522b" (UID: "ec4f77c3-e679-4e7f-92a4-dd888ba6522b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.872423 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb6vh\" (UniqueName: \"kubernetes.io/projected/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-kube-api-access-wb6vh\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.872468 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.872485 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.872499 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.872512 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.924269 4854 scope.go:117] "RemoveContainer" containerID="8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.943274 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-config-data" (OuterVolumeSpecName: "config-data") pod "ec4f77c3-e679-4e7f-92a4-dd888ba6522b" (UID: "ec4f77c3-e679-4e7f-92a4-dd888ba6522b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.947972 4854 scope.go:117] "RemoveContainer" containerID="11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.971874 4854 scope.go:117] "RemoveContainer" containerID="1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4" Jan 03 06:07:21 crc kubenswrapper[4854]: I0103 06:07:21.974521 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f77c3-e679-4e7f-92a4-dd888ba6522b-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.008325 4854 scope.go:117] "RemoveContainer" containerID="97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1" Jan 03 06:07:22 crc kubenswrapper[4854]: E0103 06:07:22.009038 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1\": container with ID starting with 97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1 not found: ID does not exist" containerID="97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.009102 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1"} err="failed to get container status \"97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1\": rpc error: code = NotFound desc = could not find container \"97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1\": container with ID starting with 97065523b45de4a8d51b89d3329b7f3d4f2237bd0df136211ebb2dab116de8d1 not found: ID does not exist" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.009133 4854 scope.go:117] "RemoveContainer" containerID="8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade" Jan 03 06:07:22 crc kubenswrapper[4854]: E0103 06:07:22.009798 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade\": container with ID starting with 8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade not found: ID does not exist" containerID="8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.009820 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade"} err="failed to get container status \"8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade\": rpc error: code = NotFound desc = could not find container \"8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade\": container with ID starting with 8f4f6f985f7e59e6dfe40a39835d4193a83cc0ae9243ed7a0fee244b02ea2ade not found: ID does not exist" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.009836 4854 scope.go:117] "RemoveContainer" containerID="11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18" Jan 03 06:07:22 crc kubenswrapper[4854]: E0103 06:07:22.010286 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18\": container with ID starting with 11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18 not found: ID does not exist" containerID="11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.010350 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18"} err="failed to get container status \"11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18\": rpc error: code = NotFound desc = could not find container \"11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18\": container with ID starting with 11b07654612794b379eace89c3cb7bcac66467c9554d1270d913fb6a29971b18 not found: ID does not exist" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.010390 4854 scope.go:117] "RemoveContainer" containerID="1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4" Jan 03 06:07:22 crc kubenswrapper[4854]: E0103 06:07:22.021430 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4\": container with ID starting with 1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4 not found: ID does not exist" containerID="1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.021473 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4"} err="failed to get container status \"1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4\": rpc error: code = NotFound desc = could not find container \"1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4\": container with ID starting with 1f2e45eb0d2f8739c77bd2ee198ff3f31bb081cff0105e884501be4a60fc41f4 not found: ID does not exist" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.081041 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.081106 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.280656 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.313789 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.341470 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:22 crc kubenswrapper[4854]: E0103 06:07:22.342185 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="ceilometer-central-agent" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.342253 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="ceilometer-central-agent" Jan 03 06:07:22 crc kubenswrapper[4854]: E0103 06:07:22.342329 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="sg-core" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 
06:07:22.342380 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="sg-core" Jan 03 06:07:22 crc kubenswrapper[4854]: E0103 06:07:22.342446 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="ceilometer-notification-agent" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.342494 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="ceilometer-notification-agent" Jan 03 06:07:22 crc kubenswrapper[4854]: E0103 06:07:22.342600 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="proxy-httpd" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.342649 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="proxy-httpd" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.342901 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="ceilometer-notification-agent" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.342969 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="proxy-httpd" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.343028 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="ceilometer-central-agent" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.343096 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" containerName="sg-core" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.345429 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.353799 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.353992 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.374893 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.495592 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.495662 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd8nk\" (UniqueName: \"kubernetes.io/projected/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-kube-api-access-pd8nk\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.495687 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-log-httpd\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.495775 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-run-httpd\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.495807 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.495830 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-scripts\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.495900 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-config-data\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.603334 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd8nk\" (UniqueName: \"kubernetes.io/projected/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-kube-api-access-pd8nk\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: 
I0103 06:07:22.604505 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-log-httpd\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.604764 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-run-httpd\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.604895 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.605005 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-scripts\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.605181 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-config-data\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.605391 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.607649 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-run-httpd\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.607934 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-log-httpd\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.610966 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.611938 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-config-data\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.614211 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.625654 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd8nk\" (UniqueName: \"kubernetes.io/projected/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-kube-api-access-pd8nk\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.626090 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-scripts\") pod \"ceilometer-0\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " pod="openstack/ceilometer-0" Jan 03 06:07:22 crc kubenswrapper[4854]: I0103 06:07:22.694033 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.052373 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.103248 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.103878 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.106796 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.541782 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.618659 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.832836 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerStarted","Data":"5956b81dc4a164f331dcdeadcee57e22d5798b260fc902af8996ca6ded589ff1"} Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.835336 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerStarted","Data":"f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b"} Jan 03 06:07:23 crc kubenswrapper[4854]: I0103 06:07:23.865918 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 03 06:07:24 crc kubenswrapper[4854]: I0103 06:07:24.145987 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec4f77c3-e679-4e7f-92a4-dd888ba6522b" path="/var/lib/kubelet/pods/ec4f77c3-e679-4e7f-92a4-dd888ba6522b/volumes" Jan 03 06:07:24 crc kubenswrapper[4854]: I0103 
06:07:24.849249 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerStarted","Data":"e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524"} Jan 03 06:07:25 crc kubenswrapper[4854]: I0103 06:07:25.110069 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 06:07:25 crc kubenswrapper[4854]: I0103 06:07:25.110438 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 06:07:26 crc kubenswrapper[4854]: I0103 06:07:26.192381 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 06:07:26 crc kubenswrapper[4854]: I0103 06:07:26.192397 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.832513 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.919224 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerStarted","Data":"9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50"} Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.919408 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-api" containerID="cri-o://0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d" gracePeriod=30 Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.919971 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-listener" containerID="cri-o://9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50" gracePeriod=30 Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.920023 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-notifier" containerID="cri-o://f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b" gracePeriod=30 Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.920058 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-evaluator" containerID="cri-o://8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8" gracePeriod=30 Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.927402 4854 generic.go:334] "Generic (PLEG): container finished" podID="32c47867-8d22-4340-98a0-37ae6b098d80" containerID="1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87" exitCode=137 Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.927723 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"32c47867-8d22-4340-98a0-37ae6b098d80","Type":"ContainerDied","Data":"1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87"} Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.927762 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32c47867-8d22-4340-98a0-37ae6b098d80","Type":"ContainerDied","Data":"df1968a863c699ff1b208889de45fe0e2c47318aa9d79f9052a2e9bdee5da694"} Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.927797 4854 scope.go:117] "RemoveContainer" containerID="1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87" Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.927999 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.928919 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-combined-ca-bundle\") pod \"32c47867-8d22-4340-98a0-37ae6b098d80\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.929186 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-config-data\") pod \"32c47867-8d22-4340-98a0-37ae6b098d80\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.929356 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62pdw\" (UniqueName: \"kubernetes.io/projected/32c47867-8d22-4340-98a0-37ae6b098d80-kube-api-access-62pdw\") pod \"32c47867-8d22-4340-98a0-37ae6b098d80\" (UID: \"32c47867-8d22-4340-98a0-37ae6b098d80\") " Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.933004 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerStarted","Data":"c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb"} Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.947564 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c47867-8d22-4340-98a0-37ae6b098d80-kube-api-access-62pdw" (OuterVolumeSpecName: "kube-api-access-62pdw") pod "32c47867-8d22-4340-98a0-37ae6b098d80" (UID: "32c47867-8d22-4340-98a0-37ae6b098d80"). InnerVolumeSpecName "kube-api-access-62pdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.964533 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.347809148 podStartE2EDuration="13.964486985s" podCreationTimestamp="2026-01-03 06:07:16 +0000 UTC" firstStartedPulling="2026-01-03 06:07:17.538205563 +0000 UTC m=+1615.864782135" lastFinishedPulling="2026-01-03 06:07:29.1548834 +0000 UTC m=+1627.481459972" observedRunningTime="2026-01-03 06:07:29.947493801 +0000 UTC m=+1628.274070383" watchObservedRunningTime="2026-01-03 06:07:29.964486985 +0000 UTC m=+1628.291063577" Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.983517 4854 scope.go:117] "RemoveContainer" containerID="1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87" Jan 03 06:07:29 crc kubenswrapper[4854]: E0103 06:07:29.994821 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87\": container with ID starting with 1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87 not found: ID does not exist" containerID="1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87" Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.994864 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87"} err="failed to get container status \"1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87\": rpc error: code = NotFound desc = could not find container \"1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87\": container with ID starting with 1567b5bc5aef6ec8d4493e57203e2e1bb21a78b3d600a4a9d135fc61e19def87 not found: ID does not exist" Jan 03 06:07:29 crc kubenswrapper[4854]: I0103 06:07:29.996550 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-config-data" (OuterVolumeSpecName: "config-data") pod "32c47867-8d22-4340-98a0-37ae6b098d80" (UID: "32c47867-8d22-4340-98a0-37ae6b098d80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.003326 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32c47867-8d22-4340-98a0-37ae6b098d80" (UID: "32c47867-8d22-4340-98a0-37ae6b098d80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.038057 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.038120 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c47867-8d22-4340-98a0-37ae6b098d80-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.038134 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62pdw\" (UniqueName: \"kubernetes.io/projected/32c47867-8d22-4340-98a0-37ae6b098d80-kube-api-access-62pdw\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.326296 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.339642 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.373159 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:07:30 crc kubenswrapper[4854]: E0103 06:07:30.374244 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c47867-8d22-4340-98a0-37ae6b098d80" containerName="nova-cell1-novncproxy-novncproxy" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.374273 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c47867-8d22-4340-98a0-37ae6b098d80" containerName="nova-cell1-novncproxy-novncproxy" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.374604 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c47867-8d22-4340-98a0-37ae6b098d80" containerName="nova-cell1-novncproxy-novncproxy" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.375714 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.383949 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.384372 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.384412 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.390991 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.457783 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.457878 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.458212 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.458325 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knzpk\" (UniqueName: \"kubernetes.io/projected/f8438395-9d06-47a8-9697-f4f5c09f9be5-kube-api-access-knzpk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.458418 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.560464 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.560559 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.560595 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.560753 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.560811 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knzpk\" (UniqueName: \"kubernetes.io/projected/f8438395-9d06-47a8-9697-f4f5c09f9be5-kube-api-access-knzpk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.569008 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.569750 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.573306 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.577040 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8438395-9d06-47a8-9697-f4f5c09f9be5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.583617 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knzpk\" (UniqueName: \"kubernetes.io/projected/f8438395-9d06-47a8-9697-f4f5c09f9be5-kube-api-access-knzpk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8438395-9d06-47a8-9697-f4f5c09f9be5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.715198 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.962117 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerStarted","Data":"74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494"} Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.970194 4854 generic.go:334] "Generic (PLEG): container finished" podID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerID="f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b" exitCode=0 Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.970230 4854 generic.go:334] "Generic (PLEG): container finished" podID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerID="8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8" exitCode=0 Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.970238 4854 generic.go:334] "Generic (PLEG): container finished" podID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerID="0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d" exitCode=0 Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.970255 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerDied","Data":"f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b"} Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.970282 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerDied","Data":"8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8"} Jan 03 06:07:30 crc kubenswrapper[4854]: I0103 06:07:30.970291 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerDied","Data":"0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d"} Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.251368 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 06:07:31 crc kubenswrapper[4854]: W0103 06:07:31.254104 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8438395_9d06_47a8_9697_f4f5c09f9be5.slice/crio-6a7ae53a520083ce7fe65ba0ebf92408c7c0587a7b185d024666ed0b7f238c9b WatchSource:0}: Error finding container 6a7ae53a520083ce7fe65ba0ebf92408c7c0587a7b185d024666ed0b7f238c9b: Status 404 returned error can't find the container with id 6a7ae53a520083ce7fe65ba0ebf92408c7c0587a7b185d024666ed0b7f238c9b Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.983995 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerStarted","Data":"5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99"} Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.984128 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="ceilometer-central-agent" containerID="cri-o://e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524" gracePeriod=30 Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.984401 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="proxy-httpd" containerID="cri-o://5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99" gracePeriod=30 Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.984484 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="sg-core" containerID="cri-o://74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494" gracePeriod=30 Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.984518 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="ceilometer-notification-agent" containerID="cri-o://c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb" gracePeriod=30 Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.984561 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.986380 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8438395-9d06-47a8-9697-f4f5c09f9be5","Type":"ContainerStarted","Data":"14b944c3fa38fa5f523be2030348983df499ddc0e83c349ca20c64a4841bdc88"} Jan 03 06:07:31 crc kubenswrapper[4854]: I0103 06:07:31.986418 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8438395-9d06-47a8-9697-f4f5c09f9be5","Type":"ContainerStarted","Data":"6a7ae53a520083ce7fe65ba0ebf92408c7c0587a7b185d024666ed0b7f238c9b"} Jan 03 06:07:32 crc kubenswrapper[4854]: I0103 06:07:32.025465 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8724857350000002 podStartE2EDuration="10.025441781s" podCreationTimestamp="2026-01-03 06:07:22 +0000 UTC" firstStartedPulling="2026-01-03 06:07:23.542298392 +0000 UTC m=+1621.868874964" lastFinishedPulling="2026-01-03 06:07:31.695254438 +0000 UTC m=+1630.021831010" observedRunningTime="2026-01-03 06:07:32.01059667 +0000 UTC m=+1630.337173252" watchObservedRunningTime="2026-01-03 06:07:32.025441781 +0000 UTC m=+1630.352018353" Jan 03 06:07:32 crc kubenswrapper[4854]: I0103 06:07:32.049321 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.049301845 podStartE2EDuration="2.049301845s" podCreationTimestamp="2026-01-03 06:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:32.037781628 +0000 UTC m=+1630.364358190" watchObservedRunningTime="2026-01-03 06:07:32.049301845 +0000 UTC m=+1630.375878417" Jan 03 06:07:32 crc kubenswrapper[4854]: I0103 06:07:32.090465 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 03 06:07:32 crc kubenswrapper[4854]: I0103 06:07:32.090995 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 03 06:07:32 crc kubenswrapper[4854]: I0103 06:07:32.108349 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 03 06:07:32 crc kubenswrapper[4854]: I0103 06:07:32.164779 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c47867-8d22-4340-98a0-37ae6b098d80" 
path="/var/lib/kubelet/pods/32c47867-8d22-4340-98a0-37ae6b098d80/volumes" Jan 03 06:07:33 crc kubenswrapper[4854]: I0103 06:07:33.004584 4854 generic.go:334] "Generic (PLEG): container finished" podID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerID="5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99" exitCode=0 Jan 03 06:07:33 crc kubenswrapper[4854]: I0103 06:07:33.004891 4854 generic.go:334] "Generic (PLEG): container finished" podID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerID="74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494" exitCode=2 Jan 03 06:07:33 crc kubenswrapper[4854]: I0103 06:07:33.004900 4854 generic.go:334] "Generic (PLEG): container finished" podID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerID="c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb" exitCode=0 Jan 03 06:07:33 crc kubenswrapper[4854]: I0103 06:07:33.004661 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerDied","Data":"5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99"} Jan 03 06:07:33 crc kubenswrapper[4854]: I0103 06:07:33.004990 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerDied","Data":"74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494"} Jan 03 06:07:33 crc kubenswrapper[4854]: I0103 06:07:33.005002 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerDied","Data":"c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb"} Jan 03 06:07:33 crc kubenswrapper[4854]: I0103 06:07:33.009803 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.114190 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.115055 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.115501 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.118186 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.695162 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.715346 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.788468 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-run-httpd\") pod \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.788530 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd8nk\" (UniqueName: \"kubernetes.io/projected/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-kube-api-access-pd8nk\") pod \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.788558 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-sg-core-conf-yaml\") pod \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.788801 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-scripts\") pod \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.788836 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-log-httpd\") pod \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.788882 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-combined-ca-bundle\") pod \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.788913 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-config-data\") pod \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\" (UID: \"3ee936f2-ed21-40c7-a10f-23eb3e2f198d\") " Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.789950 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ee936f2-ed21-40c7-a10f-23eb3e2f198d" (UID: "3ee936f2-ed21-40c7-a10f-23eb3e2f198d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.789950 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ee936f2-ed21-40c7-a10f-23eb3e2f198d" (UID: "3ee936f2-ed21-40c7-a10f-23eb3e2f198d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.796153 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-kube-api-access-pd8nk" (OuterVolumeSpecName: "kube-api-access-pd8nk") pod "3ee936f2-ed21-40c7-a10f-23eb3e2f198d" (UID: "3ee936f2-ed21-40c7-a10f-23eb3e2f198d"). InnerVolumeSpecName "kube-api-access-pd8nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.797704 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-scripts" (OuterVolumeSpecName: "scripts") pod "3ee936f2-ed21-40c7-a10f-23eb3e2f198d" (UID: "3ee936f2-ed21-40c7-a10f-23eb3e2f198d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.822565 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ee936f2-ed21-40c7-a10f-23eb3e2f198d" (UID: "3ee936f2-ed21-40c7-a10f-23eb3e2f198d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.887509 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ee936f2-ed21-40c7-a10f-23eb3e2f198d" (UID: "3ee936f2-ed21-40c7-a10f-23eb3e2f198d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.892542 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.892580 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.892594 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.892611 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.892624 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd8nk\" (UniqueName: \"kubernetes.io/projected/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-kube-api-access-pd8nk\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.892637 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.923017 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-config-data" (OuterVolumeSpecName: "config-data") pod "3ee936f2-ed21-40c7-a10f-23eb3e2f198d" (UID: "3ee936f2-ed21-40c7-a10f-23eb3e2f198d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:35 crc kubenswrapper[4854]: I0103 06:07:35.995274 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ee936f2-ed21-40c7-a10f-23eb3e2f198d-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.050063 4854 generic.go:334] "Generic (PLEG): container finished" podID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerID="e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524" exitCode=0 Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.050189 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.050234 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerDied","Data":"e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524"} Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.050294 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ee936f2-ed21-40c7-a10f-23eb3e2f198d","Type":"ContainerDied","Data":"5956b81dc4a164f331dcdeadcee57e22d5798b260fc902af8996ca6ded589ff1"} Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.050322 4854 scope.go:117] "RemoveContainer" containerID="5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.050934 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.056484 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.073175 4854 scope.go:117] "RemoveContainer" containerID="74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.103709 4854 scope.go:117] "RemoveContainer" containerID="c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.105246 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.137074 4854 scope.go:117] "RemoveContainer" containerID="e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.146063 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.180843 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:36 crc kubenswrapper[4854]: E0103 06:07:36.181598 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="ceilometer-central-agent" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.181618 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="ceilometer-central-agent" Jan 03 06:07:36 crc kubenswrapper[4854]: E0103 06:07:36.181655 4854 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="sg-core" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.181662 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="sg-core" Jan 03 06:07:36 crc kubenswrapper[4854]: E0103 06:07:36.181677 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="ceilometer-notification-agent" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.181685 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="ceilometer-notification-agent" Jan 03 06:07:36 crc kubenswrapper[4854]: E0103 06:07:36.181731 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="proxy-httpd" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.181740 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="proxy-httpd" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.182042 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="proxy-httpd" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.182057 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="ceilometer-central-agent" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.182073 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="ceilometer-notification-agent" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.182106 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" containerName="sg-core" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.187261 4854 scope.go:117] "RemoveContainer" containerID="5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.188205 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: E0103 06:07:36.189603 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99\": container with ID starting with 5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99 not found: ID does not exist" containerID="5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.189657 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99"} err="failed to get container status \"5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99\": rpc error: code = NotFound desc = could not find container \"5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99\": container with ID starting with 5dba256751019b0386a785ada8c0d47cc6029858df2de6358d5bdac623d95b99 not found: ID does not exist" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.189692 4854 scope.go:117] "RemoveContainer" containerID="74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494" Jan 03 06:07:36 crc kubenswrapper[4854]: E0103 06:07:36.194133 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494\": container with ID starting with 74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494 not found: ID does not exist" containerID="74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.194192 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494"} err="failed to get container status \"74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494\": rpc error: code = NotFound desc = could not find container \"74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494\": container with ID starting with 74f2ef472a147fd393529a873d52b201dddbeea25c79e5a1fac8d8c6bd376494 not found: ID does not exist" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.194231 4854 scope.go:117] "RemoveContainer" containerID="c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.194559 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:07:36 crc kubenswrapper[4854]: E0103 06:07:36.194721 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb\": container with ID starting with c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb not found: ID does not exist" containerID="c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.194767 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb"} err="failed to get container status \"c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb\": rpc error: code = NotFound desc = could not find container 
\"c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb\": container with ID starting with c62bf77f917e74e3c88bc9b3596d2594db7c9ce4772c60fdbc898f46f9eb57fb not found: ID does not exist" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.194799 4854 scope.go:117] "RemoveContainer" containerID="e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.195674 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:07:36 crc kubenswrapper[4854]: E0103 06:07:36.196318 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524\": container with ID starting with e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524 not found: ID does not exist" containerID="e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.196351 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524"} err="failed to get container status \"e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524\": rpc error: code = NotFound desc = could not find container \"e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524\": container with ID starting with e94a27243816fac7d1450c981a5c93a7ef4141863a5b94792580de04e768a524 not found: ID does not exist" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.199387 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.275295 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2"] Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.277745 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.286104 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2"] Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.303921 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.304132 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-log-httpd\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.304180 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-config-data\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.304541 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.304639 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-scripts\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.304718 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrw9\" (UniqueName: \"kubernetes.io/projected/db98a120-c01c-415a-b2e2-8044d2daad27-kube-api-access-gzrw9\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.304799 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-run-httpd\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407459 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrw9\" (UniqueName: \"kubernetes.io/projected/db98a120-c01c-415a-b2e2-8044d2daad27-kube-api-access-gzrw9\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407549 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-run-httpd\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " 
pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407618 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407657 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407684 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407730 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qr9\" (UniqueName: \"kubernetes.io/projected/d521510c-fc2f-4928-a2c8-45155c352562-kube-api-access-h8qr9\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407785 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407835 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-log-httpd\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407866 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-config-data\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407925 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.407968 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.408034 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-scripts\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.408094 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.408261 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-run-httpd\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.408617 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-log-httpd\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.413682 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-scripts\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.413909 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.414849 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-config-data\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.414999 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.446454 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrw9\" (UniqueName: \"kubernetes.io/projected/db98a120-c01c-415a-b2e2-8044d2daad27-kube-api-access-gzrw9\") pod \"ceilometer-0\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.510134 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.510408 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.510644 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.510836 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.510933 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.511034 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qr9\" (UniqueName: \"kubernetes.io/projected/d521510c-fc2f-4928-a2c8-45155c352562-kube-api-access-h8qr9\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.511727 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.511753 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.511775 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.511779 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.511924 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.520369 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.527301 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qr9\" (UniqueName: \"kubernetes.io/projected/d521510c-fc2f-4928-a2c8-45155c352562-kube-api-access-h8qr9\") pod \"dnsmasq-dns-6b7bbf7cf9-9g7b2\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:36 crc kubenswrapper[4854]: I0103 06:07:36.618652 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:37 crc kubenswrapper[4854]: I0103 06:07:37.216276 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:37 crc kubenswrapper[4854]: I0103 06:07:37.376558 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2"] Jan 03 06:07:38 crc kubenswrapper[4854]: I0103 06:07:38.103688 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerStarted","Data":"4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2"} Jan 03 06:07:38 crc kubenswrapper[4854]: I0103 06:07:38.104118 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerStarted","Data":"8664a4d8bf8cf134a3003ebf4f969619f1c363c26ec9b8fa53d45fc09ed3c199"} Jan 03 06:07:38 crc kubenswrapper[4854]: I0103 06:07:38.106485 4854 generic.go:334] "Generic (PLEG): container finished" podID="d521510c-fc2f-4928-a2c8-45155c352562" containerID="f6a1ae9ef209985cc157ceb0e1ac708a2d2dcf2d00250be1076aa2b626ba9eec" exitCode=0 Jan 03 06:07:38 crc kubenswrapper[4854]: I0103 06:07:38.106555 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" event={"ID":"d521510c-fc2f-4928-a2c8-45155c352562","Type":"ContainerDied","Data":"f6a1ae9ef209985cc157ceb0e1ac708a2d2dcf2d00250be1076aa2b626ba9eec"} Jan 03 06:07:38 crc kubenswrapper[4854]: I0103 06:07:38.106621 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" event={"ID":"d521510c-fc2f-4928-a2c8-45155c352562","Type":"ContainerStarted","Data":"a1069a5bfa9842fb8bcc20ed8ab01d001701bff8b8d9c69c9ab0afd09ca76275"} Jan 03 06:07:38 crc kubenswrapper[4854]: I0103 06:07:38.151980 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee936f2-ed21-40c7-a10f-23eb3e2f198d" path="/var/lib/kubelet/pods/3ee936f2-ed21-40c7-a10f-23eb3e2f198d/volumes" Jan 03 06:07:39 crc kubenswrapper[4854]: I0103 06:07:39.126556 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerStarted","Data":"3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62"} Jan 03 06:07:39 crc kubenswrapper[4854]: I0103 06:07:39.131207 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" 
event={"ID":"d521510c-fc2f-4928-a2c8-45155c352562","Type":"ContainerStarted","Data":"5c22bce98bc4a32ece7dd30b876deb7e801b8501ffe66759f8f8501daa90c0d3"} Jan 03 06:07:39 crc kubenswrapper[4854]: I0103 06:07:39.131389 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:39 crc kubenswrapper[4854]: I0103 06:07:39.171225 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" podStartSLOduration=3.171203314 podStartE2EDuration="3.171203314s" podCreationTimestamp="2026-01-03 06:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:39.15459763 +0000 UTC m=+1637.481174202" watchObservedRunningTime="2026-01-03 06:07:39.171203314 +0000 UTC m=+1637.497779886" Jan 03 06:07:39 crc kubenswrapper[4854]: I0103 06:07:39.350228 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:39 crc kubenswrapper[4854]: I0103 06:07:39.350725 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-log" containerID="cri-o://78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43" gracePeriod=30 Jan 03 06:07:39 crc kubenswrapper[4854]: I0103 06:07:39.350877 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-api" containerID="cri-o://429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf" gracePeriod=30 Jan 03 06:07:39 crc kubenswrapper[4854]: I0103 06:07:39.636712 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:40 crc kubenswrapper[4854]: I0103 06:07:40.170809 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerStarted","Data":"8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33"} Jan 03 06:07:40 crc kubenswrapper[4854]: I0103 06:07:40.177101 4854 generic.go:334] "Generic (PLEG): container finished" podID="acc395e2-fa94-4736-8167-960fa4c2779b" containerID="78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43" exitCode=143 Jan 03 06:07:40 crc kubenswrapper[4854]: I0103 06:07:40.177184 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc395e2-fa94-4736-8167-960fa4c2779b","Type":"ContainerDied","Data":"78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43"} Jan 03 06:07:40 crc kubenswrapper[4854]: I0103 06:07:40.715718 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:40 crc kubenswrapper[4854]: I0103 06:07:40.744629 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.190382 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerStarted","Data":"7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e"} Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.190738 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="proxy-httpd" containerID="cri-o://7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e" gracePeriod=30 Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.190714 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="ceilometer-central-agent" containerID="cri-o://4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2" gracePeriod=30 Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.190843 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="sg-core" containerID="cri-o://8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33" gracePeriod=30 Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.190916 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="ceilometer-notification-agent" containerID="cri-o://3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62" gracePeriod=30 Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.229680 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.709655514 podStartE2EDuration="5.229649867s" podCreationTimestamp="2026-01-03 06:07:36 +0000 UTC" firstStartedPulling="2026-01-03 06:07:37.218636741 +0000 UTC m=+1635.545213303" lastFinishedPulling="2026-01-03 06:07:40.738631084 +0000 UTC m=+1639.065207656" observedRunningTime="2026-01-03 06:07:41.221667388 +0000 UTC m=+1639.548243980" watchObservedRunningTime="2026-01-03 06:07:41.229649867 +0000 UTC m=+1639.556226439" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.249245 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.481782 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lkfxj"] Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.483487 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.484982 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.485498 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.495674 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lkfxj"] Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.611948 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-scripts\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.612002 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-config-data\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.612020 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.612059 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8z5l\" (UniqueName: \"kubernetes.io/projected/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-kube-api-access-v8z5l\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.714195 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-scripts\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.714477 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.714498 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-config-data\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.714533 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8z5l\" (UniqueName: 
\"kubernetes.io/projected/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-kube-api-access-v8z5l\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.720761 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-config-data\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.720773 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-scripts\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.720832 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.738680 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8z5l\" (UniqueName: \"kubernetes.io/projected/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-kube-api-access-v8z5l\") pod \"nova-cell1-cell-mapping-lkfxj\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.755955 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.756048 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:07:41 crc kubenswrapper[4854]: I0103 06:07:41.804324 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:42 crc kubenswrapper[4854]: I0103 06:07:42.202467 4854 generic.go:334] "Generic (PLEG): container finished" podID="db98a120-c01c-415a-b2e2-8044d2daad27" containerID="7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e" exitCode=0 Jan 03 06:07:42 crc kubenswrapper[4854]: I0103 06:07:42.202794 4854 generic.go:334] "Generic (PLEG): container finished" podID="db98a120-c01c-415a-b2e2-8044d2daad27" containerID="8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33" exitCode=2 Jan 03 06:07:42 crc kubenswrapper[4854]: I0103 06:07:42.202805 4854 generic.go:334] "Generic (PLEG): container finished" podID="db98a120-c01c-415a-b2e2-8044d2daad27" containerID="3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62" exitCode=0 Jan 03 06:07:42 crc kubenswrapper[4854]: I0103 06:07:42.202548 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerDied","Data":"7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e"} Jan 03 06:07:42 crc kubenswrapper[4854]: I0103 06:07:42.202911 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerDied","Data":"8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33"} Jan 03 06:07:42 crc kubenswrapper[4854]: I0103 06:07:42.202924 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerDied","Data":"3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62"} Jan 03 06:07:42 crc kubenswrapper[4854]: I0103 06:07:42.295301 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lkfxj"] Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.113807 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.156415 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-config-data\") pod \"acc395e2-fa94-4736-8167-960fa4c2779b\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.156635 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc395e2-fa94-4736-8167-960fa4c2779b-logs\") pod \"acc395e2-fa94-4736-8167-960fa4c2779b\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.156723 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-combined-ca-bundle\") pod \"acc395e2-fa94-4736-8167-960fa4c2779b\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.156769 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrhcj\" (UniqueName: \"kubernetes.io/projected/acc395e2-fa94-4736-8167-960fa4c2779b-kube-api-access-hrhcj\") pod \"acc395e2-fa94-4736-8167-960fa4c2779b\" (UID: \"acc395e2-fa94-4736-8167-960fa4c2779b\") " Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.159175 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc395e2-fa94-4736-8167-960fa4c2779b-logs" (OuterVolumeSpecName: "logs") pod "acc395e2-fa94-4736-8167-960fa4c2779b" (UID: "acc395e2-fa94-4736-8167-960fa4c2779b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.180659 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc395e2-fa94-4736-8167-960fa4c2779b-kube-api-access-hrhcj" (OuterVolumeSpecName: "kube-api-access-hrhcj") pod "acc395e2-fa94-4736-8167-960fa4c2779b" (UID: "acc395e2-fa94-4736-8167-960fa4c2779b"). InnerVolumeSpecName "kube-api-access-hrhcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.202805 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-config-data" (OuterVolumeSpecName: "config-data") pod "acc395e2-fa94-4736-8167-960fa4c2779b" (UID: "acc395e2-fa94-4736-8167-960fa4c2779b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.219998 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acc395e2-fa94-4736-8167-960fa4c2779b" (UID: "acc395e2-fa94-4736-8167-960fa4c2779b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.268055 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc395e2-fa94-4736-8167-960fa4c2779b-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.268094 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.268108 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrhcj\" (UniqueName: \"kubernetes.io/projected/acc395e2-fa94-4736-8167-960fa4c2779b-kube-api-access-hrhcj\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.268117 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc395e2-fa94-4736-8167-960fa4c2779b-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.269022 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lkfxj" event={"ID":"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5","Type":"ContainerStarted","Data":"7582cbb1742cfeac6bc5235eced5dc9da19d4b654e9eed76cd80622e26bcaaf3"} Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.269062 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lkfxj" event={"ID":"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5","Type":"ContainerStarted","Data":"4d6fd98e52c7ebf6ef6494310011e4f743d6b58533b8a2f9b988442cb21a18eb"} Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.303350 4854 generic.go:334] "Generic (PLEG): container finished" podID="acc395e2-fa94-4736-8167-960fa4c2779b" containerID="429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf" exitCode=0 Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.303579 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc395e2-fa94-4736-8167-960fa4c2779b","Type":"ContainerDied","Data":"429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf"} Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.303676 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc395e2-fa94-4736-8167-960fa4c2779b","Type":"ContainerDied","Data":"b0479bea2b9b539411b3ad41c363bd9144919d649ddaa482cfc520c4bd101f48"} Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.303779 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.303798 4854 scope.go:117] "RemoveContainer" containerID="429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.318301 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lkfxj" podStartSLOduration=2.318280143 podStartE2EDuration="2.318280143s" podCreationTimestamp="2026-01-03 06:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:43.299703259 +0000 UTC m=+1641.626279831" watchObservedRunningTime="2026-01-03 06:07:43.318280143 +0000 UTC m=+1641.644856715" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.377940 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.390640 4854 scope.go:117] "RemoveContainer" containerID="78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.396148 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.412194 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:43 crc kubenswrapper[4854]: E0103 06:07:43.412788 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-api" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.412811 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-api" Jan 03 06:07:43 crc kubenswrapper[4854]: E0103 06:07:43.412867 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-log" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.412877 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-log" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.413246 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-api" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.413277 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" containerName="nova-api-log" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.414980 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.421498 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.422346 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.422698 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.437282 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.454682 4854 scope.go:117] "RemoveContainer" containerID="429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf" Jan 03 06:07:43 crc kubenswrapper[4854]: E0103 06:07:43.467051 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf\": container with ID starting with 429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf not found: ID does not exist" containerID="429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.467110 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf"} err="failed to get container status \"429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf\": rpc error: code = NotFound desc = could not find container \"429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf\": container with ID starting with 429070de9fd55bf2a9a4d6f50b9de2fbb8365cde0423c80f974c2dcd4846e2bf not found: ID does not exist" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.467137 4854 scope.go:117] "RemoveContainer" containerID="78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43" Jan 03 06:07:43 crc kubenswrapper[4854]: E0103 06:07:43.469366 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43\": container with ID starting with 78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43 not found: ID does not exist" containerID="78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.469431 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43"} err="failed to get container status \"78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43\": rpc error: code = NotFound desc = could not find container \"78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43\": container with ID starting with 78a330b505babe0bf45b21c1b1b536720b2fede8ebf290f95c7629fa2d11ca43 not found: ID does not exist" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.472604 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-public-tls-certs\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 
06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.472634 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-logs\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.472668 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.472802 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-config-data\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.472872 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.472937 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkdq\" (UniqueName: \"kubernetes.io/projected/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-kube-api-access-gzkdq\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.575440 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-config-data\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.575819 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.575881 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkdq\" (UniqueName: \"kubernetes.io/projected/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-kube-api-access-gzkdq\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.575920 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-public-tls-certs\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.575938 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-logs\") pod \"nova-api-0\" (UID: 
\"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.575964 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.577121 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-logs\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.582264 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-public-tls-certs\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.582444 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.582602 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.582615 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-config-data\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.597482 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkdq\" (UniqueName: \"kubernetes.io/projected/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-kube-api-access-gzkdq\") pod \"nova-api-0\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " pod="openstack/nova-api-0" Jan 03 06:07:43 crc kubenswrapper[4854]: I0103 06:07:43.737897 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:44 crc kubenswrapper[4854]: I0103 06:07:44.133333 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc395e2-fa94-4736-8167-960fa4c2779b" path="/var/lib/kubelet/pods/acc395e2-fa94-4736-8167-960fa4c2779b/volumes" Jan 03 06:07:44 crc kubenswrapper[4854]: I0103 06:07:44.321681 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.224682 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.330045 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-run-httpd\") pod \"db98a120-c01c-415a-b2e2-8044d2daad27\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.330112 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-scripts\") pod \"db98a120-c01c-415a-b2e2-8044d2daad27\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.330173 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzrw9\" (UniqueName: \"kubernetes.io/projected/db98a120-c01c-415a-b2e2-8044d2daad27-kube-api-access-gzrw9\") pod \"db98a120-c01c-415a-b2e2-8044d2daad27\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.330283 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-sg-core-conf-yaml\") pod \"db98a120-c01c-415a-b2e2-8044d2daad27\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.330383 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-combined-ca-bundle\") pod \"db98a120-c01c-415a-b2e2-8044d2daad27\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.330518 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-config-data\") pod \"db98a120-c01c-415a-b2e2-8044d2daad27\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.330530 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "db98a120-c01c-415a-b2e2-8044d2daad27" (UID: "db98a120-c01c-415a-b2e2-8044d2daad27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.330553 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-log-httpd\") pod \"db98a120-c01c-415a-b2e2-8044d2daad27\" (UID: \"db98a120-c01c-415a-b2e2-8044d2daad27\") " Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.331117 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.331633 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "db98a120-c01c-415a-b2e2-8044d2daad27" (UID: "db98a120-c01c-415a-b2e2-8044d2daad27"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.344533 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db98a120-c01c-415a-b2e2-8044d2daad27-kube-api-access-gzrw9" (OuterVolumeSpecName: "kube-api-access-gzrw9") pod "db98a120-c01c-415a-b2e2-8044d2daad27" (UID: "db98a120-c01c-415a-b2e2-8044d2daad27"). InnerVolumeSpecName "kube-api-access-gzrw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.345049 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-scripts" (OuterVolumeSpecName: "scripts") pod "db98a120-c01c-415a-b2e2-8044d2daad27" (UID: "db98a120-c01c-415a-b2e2-8044d2daad27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.351273 4854 generic.go:334] "Generic (PLEG): container finished" podID="db98a120-c01c-415a-b2e2-8044d2daad27" containerID="4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2" exitCode=0 Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.351339 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerDied","Data":"4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2"} Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.351369 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db98a120-c01c-415a-b2e2-8044d2daad27","Type":"ContainerDied","Data":"8664a4d8bf8cf134a3003ebf4f969619f1c363c26ec9b8fa53d45fc09ed3c199"} Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.351391 4854 scope.go:117] "RemoveContainer" containerID="7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.351426 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.362741 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d","Type":"ContainerStarted","Data":"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074"} Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.362824 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d","Type":"ContainerStarted","Data":"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36"} Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.362840 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d","Type":"ContainerStarted","Data":"9f54262bc752c697a8c655ab07dd40d05ce64f5cb3eb1cf20eb212502f530116"} Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.370137 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "db98a120-c01c-415a-b2e2-8044d2daad27" (UID: "db98a120-c01c-415a-b2e2-8044d2daad27"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.395750 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.395718009 podStartE2EDuration="2.395718009s" podCreationTimestamp="2026-01-03 06:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:45.385624577 +0000 UTC m=+1643.712201169" watchObservedRunningTime="2026-01-03 06:07:45.395718009 +0000 UTC m=+1643.722294581" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.435047 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.436267 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzrw9\" (UniqueName: \"kubernetes.io/projected/db98a120-c01c-415a-b2e2-8044d2daad27-kube-api-access-gzrw9\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.436292 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.436334 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db98a120-c01c-415a-b2e2-8044d2daad27-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.446804 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db98a120-c01c-415a-b2e2-8044d2daad27" (UID: "db98a120-c01c-415a-b2e2-8044d2daad27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.476124 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-config-data" (OuterVolumeSpecName: "config-data") pod "db98a120-c01c-415a-b2e2-8044d2daad27" (UID: "db98a120-c01c-415a-b2e2-8044d2daad27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.493051 4854 scope.go:117] "RemoveContainer" containerID="8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.516820 4854 scope.go:117] "RemoveContainer" containerID="3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.538695 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.538733 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db98a120-c01c-415a-b2e2-8044d2daad27-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.542281 4854 scope.go:117] "RemoveContainer" containerID="4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.564741 4854 scope.go:117] "RemoveContainer" containerID="7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e" Jan 03 06:07:45 crc kubenswrapper[4854]: E0103 06:07:45.565180 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e\": container with ID starting with 7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e not found: ID does not exist" containerID="7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.565220 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e"} err="failed to get container status \"7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e\": rpc error: code = NotFound desc = could not find container \"7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e\": container with ID starting with 7043a0fad8084fb1649e76e46e836918d36dc3ec214ea7092d2af91c927c126e not found: ID does not exist" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.565249 4854 scope.go:117] "RemoveContainer" containerID="8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33" Jan 03 06:07:45 crc kubenswrapper[4854]: E0103 06:07:45.565513 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33\": container with ID starting with 8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33 not found: ID does not exist" containerID="8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.565534 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33"} err="failed to get container status \"8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33\": rpc error: code = NotFound desc = could not find container \"8cb92517b6764026766e638030d891c77e7115ed3a6da6f7ea6390dbe68d2f33\": container with ID starting with 
Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.565547 4854 scope.go:117] "RemoveContainer" containerID="3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62" Jan 03 06:07:45 crc kubenswrapper[4854]: E0103 06:07:45.565825 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62\": container with ID starting with 3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62 not found: ID does not exist" containerID="3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.565849 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62"} err="failed to get container status \"3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62\": rpc error: code = NotFound desc = could not find container \"3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62\": container with ID starting with 3458f68c2b18f040bed75a21202cca2445688e161b44ff61e9235417dd932c62 not found: ID does not exist" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.565861 4854 scope.go:117] "RemoveContainer" containerID="4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2" Jan 03 06:07:45 crc kubenswrapper[4854]: E0103 06:07:45.566162 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2\": container with ID starting with 4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2 not found: ID does not exist" containerID="4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.566238 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2"} err="failed to get container status \"4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2\": rpc error: code = NotFound desc = could not find container \"4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2\": container with ID starting with 4e094d139794bce24d1b77da3f162372f538774dce96ff38a3b747ba163d85b2 not found: ID does not exist" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.733857 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.755194 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.766516 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:45 crc kubenswrapper[4854]: E0103 06:07:45.767058 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="proxy-httpd" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.767091 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="proxy-httpd" Jan 03 06:07:45 crc kubenswrapper[4854]: E0103 06:07:45.767106 4854 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="ceilometer-notification-agent" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.767113 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="ceilometer-notification-agent" Jan 03 06:07:45 crc kubenswrapper[4854]: E0103 06:07:45.767126 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="ceilometer-central-agent" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.767132 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="ceilometer-central-agent" Jan 03 06:07:45 crc kubenswrapper[4854]: E0103 06:07:45.767153 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="sg-core" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.767159 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="sg-core" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.767385 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="proxy-httpd" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.767402 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="sg-core" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.767413 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="ceilometer-central-agent" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.767436 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" containerName="ceilometer-notification-agent" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.769816 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.772418 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.772559 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.783411 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.847944 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-log-httpd\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.848003 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.848162 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-config-data\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.848196 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-run-httpd\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.848216 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-scripts\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.848270 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.848299 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5c4\" (UniqueName: \"kubernetes.io/projected/0b283d54-42a4-48fb-a5a7-952b74db16b8-kube-api-access-hx5c4\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.950414 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-log-httpd\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.950464 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.950578 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-config-data\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.950602 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-run-httpd\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.950624 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-scripts\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.950664 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.950698 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx5c4\" (UniqueName: \"kubernetes.io/projected/0b283d54-42a4-48fb-a5a7-952b74db16b8-kube-api-access-hx5c4\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.951037 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-log-httpd\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.951071 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-run-httpd\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.954371 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.954828 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.955891 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-config-data\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.956595 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-scripts\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:45 crc kubenswrapper[4854]: I0103 06:07:45.968857 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx5c4\" (UniqueName: \"kubernetes.io/projected/0b283d54-42a4-48fb-a5a7-952b74db16b8-kube-api-access-hx5c4\") pod \"ceilometer-0\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " pod="openstack/ceilometer-0" Jan 03 06:07:46 crc kubenswrapper[4854]: I0103 06:07:46.133392 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db98a120-c01c-415a-b2e2-8044d2daad27" path="/var/lib/kubelet/pods/db98a120-c01c-415a-b2e2-8044d2daad27/volumes" Jan 03 06:07:46 crc kubenswrapper[4854]: I0103 06:07:46.140704 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:07:46 crc kubenswrapper[4854]: I0103 06:07:46.620237 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" Jan 03 06:07:46 crc kubenswrapper[4854]: I0103 06:07:46.637068 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:07:46 crc kubenswrapper[4854]: W0103 06:07:46.638853 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b283d54_42a4_48fb_a5a7_952b74db16b8.slice/crio-36e7aa05273f7d08b2bb38a84f9dbe9519e1a9c617efaafa14f5f6d8ad124f79 WatchSource:0}: Error finding container 36e7aa05273f7d08b2bb38a84f9dbe9519e1a9c617efaafa14f5f6d8ad124f79: Status 404 returned error can't find the container with id 36e7aa05273f7d08b2bb38a84f9dbe9519e1a9c617efaafa14f5f6d8ad124f79 Jan 03 06:07:46 crc kubenswrapper[4854]: I0103 06:07:46.721750 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vk4z5"] Jan 03 06:07:46 crc kubenswrapper[4854]: I0103 06:07:46.725375 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" podUID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerName="dnsmasq-dns" containerID="cri-o://cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda" gracePeriod=10 Jan 03 06:07:46 crc kubenswrapper[4854]: I0103 06:07:46.916874 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" podUID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.245:5353: connect: connection refused" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.304396 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.425124 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerStarted","Data":"db58d72d35553c6937243dceacb27de2ba2b335c91c86fa72db32f876385b442"} Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.425170 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerStarted","Data":"36e7aa05273f7d08b2bb38a84f9dbe9519e1a9c617efaafa14f5f6d8ad124f79"} Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.427095 4854 generic.go:334] "Generic (PLEG): container finished" podID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerID="cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda" exitCode=0 Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.427135 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" event={"ID":"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf","Type":"ContainerDied","Data":"cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda"} Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.427164 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" event={"ID":"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf","Type":"ContainerDied","Data":"1c6fe71aa07b68de3ff0e78e18d73bd939c87af8783f53825c108661e5812190"} Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.427181 4854 scope.go:117] "RemoveContainer" containerID="cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.427386 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vk4z5" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.461727 4854 scope.go:117] "RemoveContainer" containerID="43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.490872 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg8x6\" (UniqueName: \"kubernetes.io/projected/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-kube-api-access-fg8x6\") pod \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.490988 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-config\") pod \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.491037 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-nb\") pod \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.491109 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-sb\") pod \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.497685 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-svc\") pod \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.497722 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-swift-storage-0\") pod \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\" (UID: \"d274c7e9-dee8-408e-a5fe-2cbb9d319dbf\") " Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.498372 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-kube-api-access-fg8x6" (OuterVolumeSpecName: "kube-api-access-fg8x6") pod "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" (UID: "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf"). InnerVolumeSpecName "kube-api-access-fg8x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.500200 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg8x6\" (UniqueName: \"kubernetes.io/projected/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-kube-api-access-fg8x6\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.507087 4854 scope.go:117] "RemoveContainer" containerID="cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda" Jan 03 06:07:47 crc kubenswrapper[4854]: E0103 06:07:47.509893 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda\": container with ID starting with cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda not found: ID does not exist" containerID="cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.509942 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda"} err="failed to get container status \"cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda\": rpc error: code = NotFound desc = could not find container \"cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda\": container with ID starting with cae3f5ca4314adfb5f28c0244a9aeda0e23fabf33665f95cfcbe5700382ebcda not found: ID does not exist" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.509971 4854 scope.go:117] "RemoveContainer" containerID="43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3" Jan 03 06:07:47 crc kubenswrapper[4854]: E0103 06:07:47.510290 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3\": container with ID starting with 43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3 not found: ID does not exist" containerID="43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.510312 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3"} err="failed to get container status \"43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3\": rpc error: code = NotFound desc = could not find container \"43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3\": container with ID starting with 43566fcb9e3283924e6fa1df0a556243121897ff056cb783711758c151dc68d3 not found: ID does not exist" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.560738 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" (UID: "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.562098 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-config" (OuterVolumeSpecName: "config") pod "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" (UID: "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.563570 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" (UID: "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.564660 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" (UID: "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.584253 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" (UID: "d274c7e9-dee8-408e-a5fe-2cbb9d319dbf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.602580 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.602612 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.602622 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.602630 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.602640 4854 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.764650 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vk4z5"] Jan 03 06:07:47 crc kubenswrapper[4854]: I0103 06:07:47.776058 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vk4z5"] Jan 03 06:07:48 crc kubenswrapper[4854]: I0103 06:07:48.158034 4854 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" path="/var/lib/kubelet/pods/d274c7e9-dee8-408e-a5fe-2cbb9d319dbf/volumes" Jan 03 06:07:48 crc kubenswrapper[4854]: I0103 06:07:48.439302 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lkfxj" event={"ID":"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5","Type":"ContainerDied","Data":"7582cbb1742cfeac6bc5235eced5dc9da19d4b654e9eed76cd80622e26bcaaf3"} Jan 03 06:07:48 crc kubenswrapper[4854]: I0103 06:07:48.439339 4854 generic.go:334] "Generic (PLEG): container finished" podID="66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" containerID="7582cbb1742cfeac6bc5235eced5dc9da19d4b654e9eed76cd80622e26bcaaf3" exitCode=0 Jan 03 06:07:48 crc kubenswrapper[4854]: I0103 06:07:48.444192 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerStarted","Data":"4cf10882e15fed543760e137cf2cd41e094d7b423415676d1b6ba961764d7ab7"} Jan 03 06:07:49 crc kubenswrapper[4854]: I0103 06:07:49.457137 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerStarted","Data":"e35fd78b689fe3742d239829ec7cdfe025369e2c75ba6ecfff04721d9d345281"} Jan 03 06:07:49 crc kubenswrapper[4854]: I0103 06:07:49.957904 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lkfxj" Jan 03 06:07:49 crc kubenswrapper[4854]: I0103 06:07:49.988250 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-config-data\") pod \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " Jan 03 06:07:49 crc kubenswrapper[4854]: I0103 06:07:49.988418 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-scripts\") pod \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " Jan 03 06:07:49 crc kubenswrapper[4854]: I0103 06:07:49.988496 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-combined-ca-bundle\") pod \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " Jan 03 06:07:49 crc kubenswrapper[4854]: I0103 06:07:49.988529 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8z5l\" (UniqueName: \"kubernetes.io/projected/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-kube-api-access-v8z5l\") pod \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\" (UID: \"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5\") " Jan 03 06:07:49 crc kubenswrapper[4854]: I0103 06:07:49.993692 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-kube-api-access-v8z5l" (OuterVolumeSpecName: "kube-api-access-v8z5l") pod "66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" (UID: "66b95f7c-2775-47c3-ad74-dd5ffe92a9a5"). InnerVolumeSpecName "kube-api-access-v8z5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:49 crc kubenswrapper[4854]: I0103 06:07:49.996333 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-scripts" (OuterVolumeSpecName: "scripts") pod "66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" (UID: "66b95f7c-2775-47c3-ad74-dd5ffe92a9a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.025913 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" (UID: "66b95f7c-2775-47c3-ad74-dd5ffe92a9a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.047599 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-config-data" (OuterVolumeSpecName: "config-data") pod "66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" (UID: "66b95f7c-2775-47c3-ad74-dd5ffe92a9a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.092145 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.092201 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.092226 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.092237 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8z5l\" (UniqueName: \"kubernetes.io/projected/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5-kube-api-access-v8z5l\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.469446 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lkfxj" event={"ID":"66b95f7c-2775-47c3-ad74-dd5ffe92a9a5","Type":"ContainerDied","Data":"4d6fd98e52c7ebf6ef6494310011e4f743d6b58533b8a2f9b988442cb21a18eb"} Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.469489 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d6fd98e52c7ebf6ef6494310011e4f743d6b58533b8a2f9b988442cb21a18eb" Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.469499 4854 util.go:48] "No ready sandbox for pod can be found. 
Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.667134 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.667360 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b6912a97-3357-44a4-b06f-284d4ec6c357" containerName="nova-scheduler-scheduler" containerID="cri-o://74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca" gracePeriod=30
Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.676216 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.676505 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerName="nova-api-log" containerID="cri-o://b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36" gracePeriod=30
Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.677099 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerName="nova-api-api" containerID="cri-o://e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074" gracePeriod=30
Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.701601 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.701841 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-log" containerID="cri-o://3c3deddc92d214919071046f9984daf5656b1a6e4ff0ce2e45db1651ac1ac96d" gracePeriod=30
Jan 03 06:07:50 crc kubenswrapper[4854]: I0103 06:07:50.702013 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-metadata" containerID="cri-o://fb7017417a322e9530ca80496fb14a84bbcde2e3df4b43025cf7e1315f818941" gracePeriod=30
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.481567 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.487182 4854 generic.go:334] "Generic (PLEG): container finished" podID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerID="e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074" exitCode=0
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.487219 4854 generic.go:334] "Generic (PLEG): container finished" podID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerID="b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36" exitCode=143
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.487305 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d","Type":"ContainerDied","Data":"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074"}
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.487336 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d","Type":"ContainerDied","Data":"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36"}
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.487905 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d","Type":"ContainerDied","Data":"9f54262bc752c697a8c655ab07dd40d05ce64f5cb3eb1cf20eb212502f530116"}
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.487367 4854 scope.go:117] "RemoveContainer" containerID="e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074"
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.520524 4854 scope.go:117] "RemoveContainer" containerID="b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36"
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.524164 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerStarted","Data":"31fc5e08fb311d65ebebc4011426e8acfb33d8bab9f8b4fd5ab98acf724b42a9"}
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.524236 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.535303 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-logs\") pod \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") "
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.535649 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-config-data\") pod \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") "
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.535748 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-combined-ca-bundle\") pod \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") "
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.535837 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-public-tls-certs\") pod \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") "
\"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-public-tls-certs\") pod \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.537983 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-internal-tls-certs\") pod \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.538133 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzkdq\" (UniqueName: \"kubernetes.io/projected/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-kube-api-access-gzkdq\") pod \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\" (UID: \"bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d\") " Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.538567 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-logs" (OuterVolumeSpecName: "logs") pod "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" (UID: "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.539172 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.542944 4854 generic.go:334] "Generic (PLEG): container finished" podID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerID="3c3deddc92d214919071046f9984daf5656b1a6e4ff0ce2e45db1651ac1ac96d" exitCode=143 Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.545126 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-kube-api-access-gzkdq" (OuterVolumeSpecName: "kube-api-access-gzkdq") pod "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" (UID: "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d"). InnerVolumeSpecName "kube-api-access-gzkdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.548325 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261c8dea-757c-4e06-9bd5-a39fdb96f34e","Type":"ContainerDied","Data":"3c3deddc92d214919071046f9984daf5656b1a6e4ff0ce2e45db1651ac1ac96d"} Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.576427 4854 scope.go:117] "RemoveContainer" containerID="e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074" Jan 03 06:07:51 crc kubenswrapper[4854]: E0103 06:07:51.580996 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074\": container with ID starting with e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074 not found: ID does not exist" containerID="e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.581050 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074"} err="failed to get container status \"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074\": rpc error: code = NotFound desc = could not find container \"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074\": container with ID starting with e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074 not found: ID does not exist" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.581097 4854 scope.go:117] "RemoveContainer" containerID="b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36" Jan 03 06:07:51 crc kubenswrapper[4854]: E0103 06:07:51.581477 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36\": container with ID starting with b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36 not found: ID does not exist" containerID="b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.581527 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36"} err="failed to get container status \"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36\": rpc error: code = NotFound desc = could not find container \"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36\": container with ID starting with b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36 not found: ID does not exist" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.581544 4854 scope.go:117] "RemoveContainer" containerID="e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.581822 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074"} err="failed to get container status \"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074\": rpc error: code = NotFound desc = could not find container \"e9214f2ab082d7a827ed8bd70c1cb5877825472f943691828a3dd82891374074\": container with ID starting with 
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.581942 4854 scope.go:117] "RemoveContainer" containerID="b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36"
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.582516 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36"} err="failed to get container status \"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36\": rpc error: code = NotFound desc = could not find container \"b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36\": container with ID starting with b8565419a8d3c5d950ee9ada1ab483bb63ead070a41ada649663068014a8dc36 not found: ID does not exist"
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.592186 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" (UID: "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.610545 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.640824235 podStartE2EDuration="6.610523551s" podCreationTimestamp="2026-01-03 06:07:45 +0000 UTC" firstStartedPulling="2026-01-03 06:07:46.643004707 +0000 UTC m=+1644.969581279" lastFinishedPulling="2026-01-03 06:07:50.612704023 +0000 UTC m=+1648.939280595" observedRunningTime="2026-01-03 06:07:51.575978089 +0000 UTC m=+1649.902554681" watchObservedRunningTime="2026-01-03 06:07:51.610523551 +0000 UTC m=+1649.937100133"
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.612939 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-config-data" (OuterVolumeSpecName: "config-data") pod "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" (UID: "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.622002 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" (UID: "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.635559 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" (UID: "bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.646580 4854 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.646609 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzkdq\" (UniqueName: \"kubernetes.io/projected/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-kube-api-access-gzkdq\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.646621 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.646631 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:51 crc kubenswrapper[4854]: I0103 06:07:51.646641 4854 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.557525 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.596393 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.609150 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.622400 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:52 crc kubenswrapper[4854]: E0103 06:07:52.622925 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerName="init" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.622943 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerName="init" Jan 03 06:07:52 crc kubenswrapper[4854]: E0103 06:07:52.622963 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerName="nova-api-api" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.622969 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerName="nova-api-api" Jan 03 06:07:52 crc kubenswrapper[4854]: E0103 06:07:52.622978 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerName="dnsmasq-dns" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.622985 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerName="dnsmasq-dns" Jan 03 06:07:52 crc kubenswrapper[4854]: E0103 06:07:52.623012 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerName="nova-api-log" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.623017 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" 
containerName="nova-api-log" Jan 03 06:07:52 crc kubenswrapper[4854]: E0103 06:07:52.623031 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" containerName="nova-manage" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.623037 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" containerName="nova-manage" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.623264 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" containerName="nova-manage" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.623282 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="d274c7e9-dee8-408e-a5fe-2cbb9d319dbf" containerName="dnsmasq-dns" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.623299 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerName="nova-api-api" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.623309 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" containerName="nova-api-log" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.625009 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.627017 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.627270 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.636950 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.639343 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.673118 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.673189 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-public-tls-certs\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.673250 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-logs\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.673306 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-config-data\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0" Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.674264 
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.674382 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vcxg\" (UniqueName: \"kubernetes.io/projected/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-kube-api-access-4vcxg\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.777119 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.777203 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-public-tls-certs\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.777261 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-logs\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.777306 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-config-data\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.777357 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-internal-tls-certs\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.777433 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vcxg\" (UniqueName: \"kubernetes.io/projected/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-kube-api-access-4vcxg\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.777898 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-logs\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.782141 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-config-data\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.782350 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-internal-tls-certs\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.782522 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-public-tls-certs\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.794286 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vcxg\" (UniqueName: \"kubernetes.io/projected/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-kube-api-access-4vcxg\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:52 crc kubenswrapper[4854]: I0103 06:07:52.801185 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d44ca5-cb07-43ce-8bf3-13e0311e4c89-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37d44ca5-cb07-43ce-8bf3-13e0311e4c89\") " pod="openstack/nova-api-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.039017 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: E0103 06:07:53.058775 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca is running failed: container process not found" containerID="74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 03 06:07:53 crc kubenswrapper[4854]: E0103 06:07:53.059197 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca is running failed: container process not found" containerID="74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 03 06:07:53 crc kubenswrapper[4854]: E0103 06:07:53.059622 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca is running failed: container process not found" containerID="74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 03 06:07:53 crc kubenswrapper[4854]: E0103 06:07:53.059693 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b6912a97-3357-44a4-b06f-284d4ec6c357" containerName="nova-scheduler-scheduler"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.192716 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.291598 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-combined-ca-bundle\") pod \"b6912a97-3357-44a4-b06f-284d4ec6c357\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") "
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.291669 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shpcb\" (UniqueName: \"kubernetes.io/projected/b6912a97-3357-44a4-b06f-284d4ec6c357-kube-api-access-shpcb\") pod \"b6912a97-3357-44a4-b06f-284d4ec6c357\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") "
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.291720 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-config-data\") pod \"b6912a97-3357-44a4-b06f-284d4ec6c357\" (UID: \"b6912a97-3357-44a4-b06f-284d4ec6c357\") "
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.314041 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6912a97-3357-44a4-b06f-284d4ec6c357-kube-api-access-shpcb" (OuterVolumeSpecName: "kube-api-access-shpcb") pod "b6912a97-3357-44a4-b06f-284d4ec6c357" (UID: "b6912a97-3357-44a4-b06f-284d4ec6c357"). InnerVolumeSpecName "kube-api-access-shpcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.325043 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6912a97-3357-44a4-b06f-284d4ec6c357" (UID: "b6912a97-3357-44a4-b06f-284d4ec6c357"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.364705 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-config-data" (OuterVolumeSpecName: "config-data") pod "b6912a97-3357-44a4-b06f-284d4ec6c357" (UID: "b6912a97-3357-44a4-b06f-284d4ec6c357"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.395788 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.395825 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shpcb\" (UniqueName: \"kubernetes.io/projected/b6912a97-3357-44a4-b06f-284d4ec6c357-kube-api-access-shpcb\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.395837 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6912a97-3357-44a4-b06f-284d4ec6c357-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.571639 4854 generic.go:334] "Generic (PLEG): container finished" podID="b6912a97-3357-44a4-b06f-284d4ec6c357" containerID="74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca" exitCode=0 Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.571681 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b6912a97-3357-44a4-b06f-284d4ec6c357","Type":"ContainerDied","Data":"74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca"} Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.571719 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b6912a97-3357-44a4-b06f-284d4ec6c357","Type":"ContainerDied","Data":"0c3f2acec7326b78719a0cec7ab785a8dffa2b53329970846f9ea254fc24b287"} Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.571730 4854 util.go:48] "No ready sandbox for pod can be found. 
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.571739 4854 scope.go:117] "RemoveContainer" containerID="74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.609986 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.612343 4854 scope.go:117] "RemoveContainer" containerID="74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca"
Jan 03 06:07:53 crc kubenswrapper[4854]: E0103 06:07:53.614467 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca\": container with ID starting with 74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca not found: ID does not exist" containerID="74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.614532 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca"} err="failed to get container status \"74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca\": rpc error: code = NotFound desc = could not find container \"74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca\": container with ID starting with 74bd7b9349f9f238f6154ab99c91cd976a90d67f76d30b501e2bfc9b0024e9ca not found: ID does not exist"
Jan 03 06:07:53 crc kubenswrapper[4854]: W0103 06:07:53.620465 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37d44ca5_cb07_43ce_8bf3_13e0311e4c89.slice/crio-bdfb91107a9b809e1ac32c1badadc313d2a549123052d6129facb6a1e9a39622 WatchSource:0}: Error finding container bdfb91107a9b809e1ac32c1badadc313d2a549123052d6129facb6a1e9a39622: Status 404 returned error can't find the container with id bdfb91107a9b809e1ac32c1badadc313d2a549123052d6129facb6a1e9a39622
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.637835 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.680512 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.717091 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 03 06:07:53 crc kubenswrapper[4854]: E0103 06:07:53.717618 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6912a97-3357-44a4-b06f-284d4ec6c357" containerName="nova-scheduler-scheduler"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.717636 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6912a97-3357-44a4-b06f-284d4ec6c357" containerName="nova-scheduler-scheduler"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.717882 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6912a97-3357-44a4-b06f-284d4ec6c357" containerName="nova-scheduler-scheduler"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.718812 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.721870 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.733677 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.808440 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgf4x\" (UniqueName: \"kubernetes.io/projected/b650c464-3b56-4ef8-9ccd-a00b23590e37-kube-api-access-lgf4x\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.808765 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b650c464-3b56-4ef8-9ccd-a00b23590e37-config-data\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.808804 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b650c464-3b56-4ef8-9ccd-a00b23590e37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.864886 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:49806->10.217.0.250:8775: read: connection reset by peer"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.864909 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:49810->10.217.0.250:8775: read: connection reset by peer"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.910836 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgf4x\" (UniqueName: \"kubernetes.io/projected/b650c464-3b56-4ef8-9ccd-a00b23590e37-kube-api-access-lgf4x\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.910987 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b650c464-3b56-4ef8-9ccd-a00b23590e37-config-data\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.911036 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b650c464-3b56-4ef8-9ccd-a00b23590e37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0"
Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.917940 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b650c464-3b56-4ef8-9ccd-a00b23590e37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0"
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b650c464-3b56-4ef8-9ccd-a00b23590e37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.919962 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b650c464-3b56-4ef8-9ccd-a00b23590e37-config-data\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:53 crc kubenswrapper[4854]: I0103 06:07:53.931364 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgf4x\" (UniqueName: \"kubernetes.io/projected/b650c464-3b56-4ef8-9ccd-a00b23590e37-kube-api-access-lgf4x\") pod \"nova-scheduler-0\" (UID: \"b650c464-3b56-4ef8-9ccd-a00b23590e37\") " pod="openstack/nova-scheduler-0" Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.044891 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.135046 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6912a97-3357-44a4-b06f-284d4ec6c357" path="/var/lib/kubelet/pods/b6912a97-3357-44a4-b06f-284d4ec6c357/volumes" Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.136143 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d" path="/var/lib/kubelet/pods/bb02f13a-dcf3-4b20-b2a8-6fa7f337b34d/volumes" Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.563051 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.592246 4854 generic.go:334] "Generic (PLEG): container finished" podID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerID="fb7017417a322e9530ca80496fb14a84bbcde2e3df4b43025cf7e1315f818941" exitCode=0 Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.592328 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261c8dea-757c-4e06-9bd5-a39fdb96f34e","Type":"ContainerDied","Data":"fb7017417a322e9530ca80496fb14a84bbcde2e3df4b43025cf7e1315f818941"} Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.593836 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d44ca5-cb07-43ce-8bf3-13e0311e4c89","Type":"ContainerStarted","Data":"89661fbcd53ce72a657f586be0e761ccf153bc588190b9586a2530cd4aece00d"} Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.594233 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d44ca5-cb07-43ce-8bf3-13e0311e4c89","Type":"ContainerStarted","Data":"bdfb91107a9b809e1ac32c1badadc313d2a549123052d6129facb6a1e9a39622"} Jan 03 06:07:54 crc kubenswrapper[4854]: I0103 06:07:54.594914 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b650c464-3b56-4ef8-9ccd-a00b23590e37","Type":"ContainerStarted","Data":"e9ce8044fc94a62b8a6b17cb553e67a513a1f37e6baaa694e0dcf043b9fa862c"} Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.084008 4854 util.go:48] "No ready sandbox for pod can be found. 
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.241823 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6bhr\" (UniqueName: \"kubernetes.io/projected/261c8dea-757c-4e06-9bd5-a39fdb96f34e-kube-api-access-t6bhr\") pod \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") "
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.241871 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-combined-ca-bundle\") pod \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") "
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.242049 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-config-data\") pod \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") "
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.242189 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-nova-metadata-tls-certs\") pod \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") "
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.242269 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261c8dea-757c-4e06-9bd5-a39fdb96f34e-logs\") pod \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\" (UID: \"261c8dea-757c-4e06-9bd5-a39fdb96f34e\") "
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.245427 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261c8dea-757c-4e06-9bd5-a39fdb96f34e-logs" (OuterVolumeSpecName: "logs") pod "261c8dea-757c-4e06-9bd5-a39fdb96f34e" (UID: "261c8dea-757c-4e06-9bd5-a39fdb96f34e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.262905 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261c8dea-757c-4e06-9bd5-a39fdb96f34e-kube-api-access-t6bhr" (OuterVolumeSpecName: "kube-api-access-t6bhr") pod "261c8dea-757c-4e06-9bd5-a39fdb96f34e" (UID: "261c8dea-757c-4e06-9bd5-a39fdb96f34e"). InnerVolumeSpecName "kube-api-access-t6bhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.306256 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-config-data" (OuterVolumeSpecName: "config-data") pod "261c8dea-757c-4e06-9bd5-a39fdb96f34e" (UID: "261c8dea-757c-4e06-9bd5-a39fdb96f34e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.316538 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "261c8dea-757c-4e06-9bd5-a39fdb96f34e" (UID: "261c8dea-757c-4e06-9bd5-a39fdb96f34e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.332351 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "261c8dea-757c-4e06-9bd5-a39fdb96f34e" (UID: "261c8dea-757c-4e06-9bd5-a39fdb96f34e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.344660 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6bhr\" (UniqueName: \"kubernetes.io/projected/261c8dea-757c-4e06-9bd5-a39fdb96f34e-kube-api-access-t6bhr\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.344697 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.344707 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.344717 4854 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261c8dea-757c-4e06-9bd5-a39fdb96f34e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.344726 4854 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261c8dea-757c-4e06-9bd5-a39fdb96f34e-logs\") on node \"crc\" DevicePath \"\"" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.614758 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261c8dea-757c-4e06-9bd5-a39fdb96f34e","Type":"ContainerDied","Data":"19a41f2c2b4aeb03ad8e5e02bce7b35d2cd782e31a11ec03eab4f6befc656f61"} Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.614784 4854 util.go:48] "No ready sandbox for pod can be found. 
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.614826 4854 scope.go:117] "RemoveContainer" containerID="fb7017417a322e9530ca80496fb14a84bbcde2e3df4b43025cf7e1315f818941"
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.620092 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d44ca5-cb07-43ce-8bf3-13e0311e4c89","Type":"ContainerStarted","Data":"b708fc3fd8d4c268ea84169251115123663e56bfe1a9b46cab22453b94970132"}
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.632390 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b650c464-3b56-4ef8-9ccd-a00b23590e37","Type":"ContainerStarted","Data":"bc6c380e562e3cc7e38479daa034eba0bdd4fe5b8e02288f27b4f9c30d91d3bd"}
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.657932 4854 scope.go:117] "RemoveContainer" containerID="3c3deddc92d214919071046f9984daf5656b1a6e4ff0ce2e45db1651ac1ac96d"
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.684088 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.684055935 podStartE2EDuration="3.684055935s" podCreationTimestamp="2026-01-03 06:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:55.679003159 +0000 UTC m=+1654.005579721" watchObservedRunningTime="2026-01-03 06:07:55.684055935 +0000 UTC m=+1654.010632507"
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.764215 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.814859 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.829554 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8295340429999998 podStartE2EDuration="2.829534043s" podCreationTimestamp="2026-01-03 06:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:55.733311164 +0000 UTC m=+1654.059887736" watchObservedRunningTime="2026-01-03 06:07:55.829534043 +0000 UTC m=+1654.156110605"
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.829677 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 03 06:07:55 crc kubenswrapper[4854]: E0103 06:07:55.830338 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-log"
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.830371 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-log"
Jan 03 06:07:55 crc kubenswrapper[4854]: E0103 06:07:55.830383 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-metadata"
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.830391 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-metadata"
Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.830630 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-log"
podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-log" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.830656 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" containerName="nova-metadata-metadata" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.832034 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.834813 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.835183 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.856453 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.970524 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-config-data\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.970856 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.971205 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64c47821-9bcb-435f-9802-15d45eb73f52-logs\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.971300 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxzp\" (UniqueName: \"kubernetes.io/projected/64c47821-9bcb-435f-9802-15d45eb73f52-kube-api-access-hrxzp\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:55 crc kubenswrapper[4854]: I0103 06:07:55.971475 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.074312 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.074433 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64c47821-9bcb-435f-9802-15d45eb73f52-logs\") pod \"nova-metadata-0\" (UID: 
\"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.074475 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrxzp\" (UniqueName: \"kubernetes.io/projected/64c47821-9bcb-435f-9802-15d45eb73f52-kube-api-access-hrxzp\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.074552 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.074716 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-config-data\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.074912 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64c47821-9bcb-435f-9802-15d45eb73f52-logs\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.080166 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.080799 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.090905 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c47821-9bcb-435f-9802-15d45eb73f52-config-data\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.095873 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrxzp\" (UniqueName: \"kubernetes.io/projected/64c47821-9bcb-435f-9802-15d45eb73f52-kube-api-access-hrxzp\") pod \"nova-metadata-0\" (UID: \"64c47821-9bcb-435f-9802-15d45eb73f52\") " pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.136921 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261c8dea-757c-4e06-9bd5-a39fdb96f34e" path="/var/lib/kubelet/pods/261c8dea-757c-4e06-9bd5-a39fdb96f34e/volumes" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.202866 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 06:07:56 crc kubenswrapper[4854]: I0103 06:07:56.723136 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 06:07:57 crc kubenswrapper[4854]: I0103 06:07:57.656785 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"64c47821-9bcb-435f-9802-15d45eb73f52","Type":"ContainerStarted","Data":"70958aecaeebcc1884d510581ab40227fa3f3f457c203a48cb1f2e2b2f452bff"} Jan 03 06:07:57 crc kubenswrapper[4854]: I0103 06:07:57.657160 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"64c47821-9bcb-435f-9802-15d45eb73f52","Type":"ContainerStarted","Data":"2a6f1dd6a96080c8344fb1a5e2f95de188f8e021002b4bb71dd95d4d9093778d"} Jan 03 06:07:57 crc kubenswrapper[4854]: I0103 06:07:57.657171 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"64c47821-9bcb-435f-9802-15d45eb73f52","Type":"ContainerStarted","Data":"28042a7d7a1d868d65ec66b379afc135cad4e0de27380b7436202cdd16f9833e"} Jan 03 06:07:57 crc kubenswrapper[4854]: I0103 06:07:57.693775 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.693753233 podStartE2EDuration="2.693753233s" podCreationTimestamp="2026-01-03 06:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:07:57.673854897 +0000 UTC m=+1656.000431469" watchObservedRunningTime="2026-01-03 06:07:57.693753233 +0000 UTC m=+1656.020329805" Jan 03 06:07:59 crc kubenswrapper[4854]: I0103 06:07:59.045767 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.498559 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.598966 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-config-data\") pod \"7eb108e4-cabe-4eca-afb4-4104b147b759\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.599163 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-scripts\") pod \"7eb108e4-cabe-4eca-afb4-4104b147b759\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.599196 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwf7f\" (UniqueName: \"kubernetes.io/projected/7eb108e4-cabe-4eca-afb4-4104b147b759-kube-api-access-cwf7f\") pod \"7eb108e4-cabe-4eca-afb4-4104b147b759\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.599241 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-combined-ca-bundle\") pod \"7eb108e4-cabe-4eca-afb4-4104b147b759\" (UID: \"7eb108e4-cabe-4eca-afb4-4104b147b759\") " Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.607636 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb108e4-cabe-4eca-afb4-4104b147b759-kube-api-access-cwf7f" (OuterVolumeSpecName: "kube-api-access-cwf7f") pod "7eb108e4-cabe-4eca-afb4-4104b147b759" (UID: "7eb108e4-cabe-4eca-afb4-4104b147b759"). InnerVolumeSpecName "kube-api-access-cwf7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.615247 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-scripts" (OuterVolumeSpecName: "scripts") pod "7eb108e4-cabe-4eca-afb4-4104b147b759" (UID: "7eb108e4-cabe-4eca-afb4-4104b147b759"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.700186 4854 generic.go:334] "Generic (PLEG): container finished" podID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerID="9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50" exitCode=137 Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.700235 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerDied","Data":"9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50"} Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.700245 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.700262 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7eb108e4-cabe-4eca-afb4-4104b147b759","Type":"ContainerDied","Data":"38b30f234c6927ac913a173a496b85168711da8e88c0d3d3981155cda1803182"} Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.700277 4854 scope.go:117] "RemoveContainer" containerID="9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.702328 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.702432 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwf7f\" (UniqueName: \"kubernetes.io/projected/7eb108e4-cabe-4eca-afb4-4104b147b759-kube-api-access-cwf7f\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.758243 4854 scope.go:117] "RemoveContainer" containerID="f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.772783 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eb108e4-cabe-4eca-afb4-4104b147b759" (UID: "7eb108e4-cabe-4eca-afb4-4104b147b759"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.779681 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-config-data" (OuterVolumeSpecName: "config-data") pod "7eb108e4-cabe-4eca-afb4-4104b147b759" (UID: "7eb108e4-cabe-4eca-afb4-4104b147b759"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.793207 4854 scope.go:117] "RemoveContainer" containerID="8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.805293 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.805330 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb108e4-cabe-4eca-afb4-4104b147b759-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.815149 4854 scope.go:117] "RemoveContainer" containerID="0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.842624 4854 scope.go:117] "RemoveContainer" containerID="9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50" Jan 03 06:08:00 crc kubenswrapper[4854]: E0103 06:08:00.844066 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50\": container with ID starting with 9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50 not found: ID does not exist" containerID="9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.844128 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50"} err="failed to get container status \"9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50\": rpc error: code = NotFound desc = could not find container \"9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50\": container with ID starting with 9cd01d567662b47bde48758971fdee4db5f1305265086e33fdba07616aa2ab50 not found: ID does not exist" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.844162 4854 scope.go:117] "RemoveContainer" containerID="f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b" Jan 03 06:08:00 crc kubenswrapper[4854]: E0103 06:08:00.844635 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b\": container with ID starting with f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b not found: ID does not exist" containerID="f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.844656 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b"} err="failed to get container status \"f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b\": rpc error: code = NotFound desc = could not find container \"f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b\": container with ID starting with f6c24955dcc27f14af561c2ad0699104d42cf1ce682d3f8bdfeee35908fe078b not found: ID does not exist" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.844669 4854 scope.go:117] "RemoveContainer" 
containerID="8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8" Jan 03 06:08:00 crc kubenswrapper[4854]: E0103 06:08:00.844918 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8\": container with ID starting with 8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8 not found: ID does not exist" containerID="8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.844937 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8"} err="failed to get container status \"8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8\": rpc error: code = NotFound desc = could not find container \"8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8\": container with ID starting with 8d6faa93cfe4277de9a204cc0900087b922d49a672da006257f3580cf27c23c8 not found: ID does not exist" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.844949 4854 scope.go:117] "RemoveContainer" containerID="0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d" Jan 03 06:08:00 crc kubenswrapper[4854]: E0103 06:08:00.845259 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d\": container with ID starting with 0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d not found: ID does not exist" containerID="0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d" Jan 03 06:08:00 crc kubenswrapper[4854]: I0103 06:08:00.845278 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d"} err="failed to get container status \"0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d\": rpc error: code = NotFound desc = could not find container \"0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d\": container with ID starting with 0e2dc2ee6a888765aa49f91ae537905fd2fc0ad221a0bd1eeb89aa8528c0811d not found: ID does not exist" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.037958 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.050193 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.068900 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 03 06:08:01 crc kubenswrapper[4854]: E0103 06:08:01.069567 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-notifier" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.069599 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-notifier" Jan 03 06:08:01 crc kubenswrapper[4854]: E0103 06:08:01.069631 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-listener" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.069641 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" 
containerName="aodh-listener" Jan 03 06:08:01 crc kubenswrapper[4854]: E0103 06:08:01.069660 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-api" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.069668 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-api" Jan 03 06:08:01 crc kubenswrapper[4854]: E0103 06:08:01.069696 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-evaluator" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.069705 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-evaluator" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.070056 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-evaluator" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.070133 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-api" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.070155 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-notifier" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.070183 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" containerName="aodh-listener" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.073776 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.077680 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bkf2n" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.077848 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.077951 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.078130 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.078249 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.084121 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.203376 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.203418 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.214942 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.215044 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-config-data\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.215154 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tgw\" (UniqueName: \"kubernetes.io/projected/df567d27-a8cb-4757-8f04-c46469c0a7e4-kube-api-access-n8tgw\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.215211 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-scripts\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.215266 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-public-tls-certs\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.215287 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-internal-tls-certs\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.317648 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.317767 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-config-data\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.317879 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8tgw\" (UniqueName: \"kubernetes.io/projected/df567d27-a8cb-4757-8f04-c46469c0a7e4-kube-api-access-n8tgw\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.317953 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-scripts\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.317997 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-public-tls-certs\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.318017 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-internal-tls-certs\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.321679 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-scripts\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.322311 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-config-data\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.322841 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-internal-tls-certs\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.323317 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-public-tls-certs\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.325008 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.336831 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8tgw\" (UniqueName: \"kubernetes.io/projected/df567d27-a8cb-4757-8f04-c46469c0a7e4-kube-api-access-n8tgw\") pod \"aodh-0\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.421708 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 03 06:08:01 crc kubenswrapper[4854]: I0103 06:08:01.918392 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 03 06:08:02 crc kubenswrapper[4854]: I0103 06:08:02.136510 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb108e4-cabe-4eca-afb4-4104b147b759" path="/var/lib/kubelet/pods/7eb108e4-cabe-4eca-afb4-4104b147b759/volumes" Jan 03 06:08:02 crc kubenswrapper[4854]: I0103 06:08:02.734725 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerStarted","Data":"fb3a9ab99a93c69c71499c0b9cd7b71c4cb1a4b2ae434995657da4b3fd1936d6"} Jan 03 06:08:03 crc kubenswrapper[4854]: I0103 06:08:03.043648 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 06:08:03 crc kubenswrapper[4854]: I0103 06:08:03.043694 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 06:08:03 crc kubenswrapper[4854]: I0103 06:08:03.754215 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerStarted","Data":"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0"} Jan 03 06:08:03 crc kubenswrapper[4854]: I0103 06:08:03.755312 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerStarted","Data":"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e"} Jan 03 06:08:04 crc kubenswrapper[4854]: I0103 06:08:04.045844 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 03 06:08:04 crc kubenswrapper[4854]: I0103 06:08:04.065752 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="37d44ca5-cb07-43ce-8bf3-13e0311e4c89" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.5:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 06:08:04 crc kubenswrapper[4854]: I0103 06:08:04.066108 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="37d44ca5-cb07-43ce-8bf3-13e0311e4c89" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.5:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 06:08:04 crc kubenswrapper[4854]: I0103 06:08:04.086825 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 03 06:08:04 crc kubenswrapper[4854]: I0103 06:08:04.768062 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerStarted","Data":"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3"} Jan 03 06:08:04 crc kubenswrapper[4854]: I0103 06:08:04.803209 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 03 06:08:05 crc kubenswrapper[4854]: I0103 06:08:05.781256 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerStarted","Data":"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c"} Jan 03 06:08:05 crc kubenswrapper[4854]: I0103 06:08:05.811366 4854 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.8654970290000001 podStartE2EDuration="4.811343867s" podCreationTimestamp="2026-01-03 06:08:01 +0000 UTC" firstStartedPulling="2026-01-03 06:08:01.920830316 +0000 UTC m=+1660.247406888" lastFinishedPulling="2026-01-03 06:08:04.866677134 +0000 UTC m=+1663.193253726" observedRunningTime="2026-01-03 06:08:05.806986508 +0000 UTC m=+1664.133563090" watchObservedRunningTime="2026-01-03 06:08:05.811343867 +0000 UTC m=+1664.137920439" Jan 03 06:08:06 crc kubenswrapper[4854]: I0103 06:08:06.203236 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 03 06:08:06 crc kubenswrapper[4854]: I0103 06:08:06.203298 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 03 06:08:07 crc kubenswrapper[4854]: I0103 06:08:07.224351 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="64c47821-9bcb-435f-9802-15d45eb73f52" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 06:08:07 crc kubenswrapper[4854]: I0103 06:08:07.224375 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="64c47821-9bcb-435f-9802-15d45eb73f52" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 06:08:11 crc kubenswrapper[4854]: I0103 06:08:11.755581 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:08:11 crc kubenswrapper[4854]: I0103 06:08:11.756140 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:08:13 crc kubenswrapper[4854]: I0103 06:08:13.049270 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 03 06:08:13 crc kubenswrapper[4854]: I0103 06:08:13.051035 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 03 06:08:13 crc kubenswrapper[4854]: I0103 06:08:13.051600 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 03 06:08:13 crc kubenswrapper[4854]: I0103 06:08:13.058708 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 03 06:08:13 crc kubenswrapper[4854]: I0103 06:08:13.881620 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 03 06:08:13 crc kubenswrapper[4854]: I0103 06:08:13.890652 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 03 06:08:16 crc kubenswrapper[4854]: I0103 06:08:16.148624 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 03 06:08:16 crc 
kubenswrapper[4854]: I0103 06:08:16.221887 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 03 06:08:16 crc kubenswrapper[4854]: I0103 06:08:16.225909 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 03 06:08:16 crc kubenswrapper[4854]: I0103 06:08:16.230646 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 03 06:08:16 crc kubenswrapper[4854]: I0103 06:08:16.932167 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 03 06:08:20 crc kubenswrapper[4854]: I0103 06:08:20.496106 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:08:20 crc kubenswrapper[4854]: I0103 06:08:20.496819 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b2518f81-3d3d-47a6-a157-19c2685f07d2" containerName="kube-state-metrics" containerID="cri-o://b65f5d8c9356c57828ae2c5f130c3053c2ad374bb9721b32ae54d663648d9a17" gracePeriod=30 Jan 03 06:08:20 crc kubenswrapper[4854]: I0103 06:08:20.688033 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:08:20 crc kubenswrapper[4854]: I0103 06:08:20.688717 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="34ba0145-7948-47f0-bec5-7f5fc6cb1150" containerName="mysqld-exporter" containerID="cri-o://02d19c211e557252722ce483b873a9bb932af341ec47b481c980ccc8a449aaeb" gracePeriod=30 Jan 03 06:08:20 crc kubenswrapper[4854]: I0103 06:08:20.978643 4854 generic.go:334] "Generic (PLEG): container finished" podID="b2518f81-3d3d-47a6-a157-19c2685f07d2" containerID="b65f5d8c9356c57828ae2c5f130c3053c2ad374bb9721b32ae54d663648d9a17" exitCode=2 Jan 03 06:08:20 crc kubenswrapper[4854]: I0103 06:08:20.978720 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2518f81-3d3d-47a6-a157-19c2685f07d2","Type":"ContainerDied","Data":"b65f5d8c9356c57828ae2c5f130c3053c2ad374bb9721b32ae54d663648d9a17"} Jan 03 06:08:20 crc kubenswrapper[4854]: I0103 06:08:20.982328 4854 generic.go:334] "Generic (PLEG): container finished" podID="34ba0145-7948-47f0-bec5-7f5fc6cb1150" containerID="02d19c211e557252722ce483b873a9bb932af341ec47b481c980ccc8a449aaeb" exitCode=2 Jan 03 06:08:20 crc kubenswrapper[4854]: I0103 06:08:20.982367 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"34ba0145-7948-47f0-bec5-7f5fc6cb1150","Type":"ContainerDied","Data":"02d19c211e557252722ce483b873a9bb932af341ec47b481c980ccc8a449aaeb"} Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.318020 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.327351 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.462353 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whvlb\" (UniqueName: \"kubernetes.io/projected/34ba0145-7948-47f0-bec5-7f5fc6cb1150-kube-api-access-whvlb\") pod \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.462532 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-combined-ca-bundle\") pod \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.462795 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbpsb\" (UniqueName: \"kubernetes.io/projected/b2518f81-3d3d-47a6-a157-19c2685f07d2-kube-api-access-fbpsb\") pod \"b2518f81-3d3d-47a6-a157-19c2685f07d2\" (UID: \"b2518f81-3d3d-47a6-a157-19c2685f07d2\") " Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.462823 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-config-data\") pod \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\" (UID: \"34ba0145-7948-47f0-bec5-7f5fc6cb1150\") " Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.470334 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2518f81-3d3d-47a6-a157-19c2685f07d2-kube-api-access-fbpsb" (OuterVolumeSpecName: "kube-api-access-fbpsb") pod "b2518f81-3d3d-47a6-a157-19c2685f07d2" (UID: "b2518f81-3d3d-47a6-a157-19c2685f07d2"). InnerVolumeSpecName "kube-api-access-fbpsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.470989 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ba0145-7948-47f0-bec5-7f5fc6cb1150-kube-api-access-whvlb" (OuterVolumeSpecName: "kube-api-access-whvlb") pod "34ba0145-7948-47f0-bec5-7f5fc6cb1150" (UID: "34ba0145-7948-47f0-bec5-7f5fc6cb1150"). InnerVolumeSpecName "kube-api-access-whvlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.502139 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34ba0145-7948-47f0-bec5-7f5fc6cb1150" (UID: "34ba0145-7948-47f0-bec5-7f5fc6cb1150"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.538951 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-config-data" (OuterVolumeSpecName: "config-data") pod "34ba0145-7948-47f0-bec5-7f5fc6cb1150" (UID: "34ba0145-7948-47f0-bec5-7f5fc6cb1150"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.565467 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbpsb\" (UniqueName: \"kubernetes.io/projected/b2518f81-3d3d-47a6-a157-19c2685f07d2-kube-api-access-fbpsb\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.565562 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.565575 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whvlb\" (UniqueName: \"kubernetes.io/projected/34ba0145-7948-47f0-bec5-7f5fc6cb1150-kube-api-access-whvlb\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:21 crc kubenswrapper[4854]: I0103 06:08:21.565588 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba0145-7948-47f0-bec5-7f5fc6cb1150-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.006632 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.006659 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2518f81-3d3d-47a6-a157-19c2685f07d2","Type":"ContainerDied","Data":"e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126"} Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.006788 4854 scope.go:117] "RemoveContainer" containerID="b65f5d8c9356c57828ae2c5f130c3053c2ad374bb9721b32ae54d663648d9a17" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.009402 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"34ba0145-7948-47f0-bec5-7f5fc6cb1150","Type":"ContainerDied","Data":"fe3cf8768f12da88fd1a54bdc23aaee6ba78a7d2bda073cf5c769354949edd57"} Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.009452 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.061546 4854 scope.go:117] "RemoveContainer" containerID="02d19c211e557252722ce483b873a9bb932af341ec47b481c980ccc8a449aaeb" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.078218 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.105375 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.117272 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.200018 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ba0145-7948-47f0-bec5-7f5fc6cb1150" path="/var/lib/kubelet/pods/34ba0145-7948-47f0-bec5-7f5fc6cb1150/volumes" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.200940 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:08:22 crc kubenswrapper[4854]: E0103 06:08:22.201393 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ba0145-7948-47f0-bec5-7f5fc6cb1150" containerName="mysqld-exporter" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.201411 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ba0145-7948-47f0-bec5-7f5fc6cb1150" containerName="mysqld-exporter" Jan 03 06:08:22 crc kubenswrapper[4854]: E0103 06:08:22.201450 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2518f81-3d3d-47a6-a157-19c2685f07d2" containerName="kube-state-metrics" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.201457 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2518f81-3d3d-47a6-a157-19c2685f07d2" containerName="kube-state-metrics" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.201715 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ba0145-7948-47f0-bec5-7f5fc6cb1150" containerName="mysqld-exporter" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.201733 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2518f81-3d3d-47a6-a157-19c2685f07d2" containerName="kube-state-metrics" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.203333 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.206172 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.206466 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.211440 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.230102 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.249068 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.251060 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.253240 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.254661 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.266216 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.288023 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2bx7\" (UniqueName: \"kubernetes.io/projected/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-kube-api-access-l2bx7\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.288191 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.288267 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-config-data\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.288492 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.288529 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.288575 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.288972 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.289013 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s6mmk\" (UniqueName: \"kubernetes.io/projected/006530e4-7385-4334-80e8-86bfcf5f645f-kube-api-access-s6mmk\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.391360 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2bx7\" (UniqueName: \"kubernetes.io/projected/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-kube-api-access-l2bx7\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.391406 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.391432 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-config-data\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.391496 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.391519 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.391541 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.391642 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.391665 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6mmk\" (UniqueName: \"kubernetes.io/projected/006530e4-7385-4334-80e8-86bfcf5f645f-kube-api-access-s6mmk\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.397439 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.398492 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.401735 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-config-data\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.403657 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.404184 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.405385 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006530e4-7385-4334-80e8-86bfcf5f645f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.415537 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6mmk\" (UniqueName: \"kubernetes.io/projected/006530e4-7385-4334-80e8-86bfcf5f645f-kube-api-access-s6mmk\") pod \"kube-state-metrics-0\" (UID: \"006530e4-7385-4334-80e8-86bfcf5f645f\") " pod="openstack/kube-state-metrics-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.415947 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2bx7\" (UniqueName: \"kubernetes.io/projected/5325c3f3-d386-41ed-aa06-b0adfc7ce2b9-kube-api-access-l2bx7\") pod \"mysqld-exporter-0\" (UID: \"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9\") " pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.535234 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 03 06:08:22 crc kubenswrapper[4854]: I0103 06:08:22.570751 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 06:08:23 crc kubenswrapper[4854]: I0103 06:08:23.100060 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 03 06:08:23 crc kubenswrapper[4854]: W0103 06:08:23.258429 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod006530e4_7385_4334_80e8_86bfcf5f645f.slice/crio-49bbdc3f5ee3b4d6be6ae174d3d9bbe919908a4ed125d9f8a5102e580aa005e0 WatchSource:0}: Error finding container 49bbdc3f5ee3b4d6be6ae174d3d9bbe919908a4ed125d9f8a5102e580aa005e0: Status 404 returned error can't find the container with id 49bbdc3f5ee3b4d6be6ae174d3d9bbe919908a4ed125d9f8a5102e580aa005e0 Jan 03 06:08:23 crc kubenswrapper[4854]: I0103 06:08:23.262423 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 06:08:23 crc kubenswrapper[4854]: I0103 06:08:23.348281 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:08:23 crc kubenswrapper[4854]: I0103 06:08:23.348633 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="ceilometer-central-agent" containerID="cri-o://db58d72d35553c6937243dceacb27de2ba2b335c91c86fa72db32f876385b442" gracePeriod=30 Jan 03 06:08:23 crc kubenswrapper[4854]: I0103 06:08:23.348772 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="ceilometer-notification-agent" containerID="cri-o://4cf10882e15fed543760e137cf2cd41e094d7b423415676d1b6ba961764d7ab7" gracePeriod=30 Jan 03 06:08:23 crc kubenswrapper[4854]: I0103 06:08:23.348783 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="sg-core" containerID="cri-o://e35fd78b689fe3742d239829ec7cdfe025369e2c75ba6ecfff04721d9d345281" gracePeriod=30 Jan 03 06:08:23 crc kubenswrapper[4854]: I0103 06:08:23.349034 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="proxy-httpd" containerID="cri-o://31fc5e08fb311d65ebebc4011426e8acfb33d8bab9f8b4fd5ab98acf724b42a9" gracePeriod=30 Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.041170 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"006530e4-7385-4334-80e8-86bfcf5f645f","Type":"ContainerStarted","Data":"b58f07a91b101d402e5ad5f9c0b5494b8e40ab02b756053d8fab7a7e58a30fc7"} Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.041508 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"006530e4-7385-4334-80e8-86bfcf5f645f","Type":"ContainerStarted","Data":"49bbdc3f5ee3b4d6be6ae174d3d9bbe919908a4ed125d9f8a5102e580aa005e0"} Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.043208 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.058987 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9","Type":"ContainerStarted","Data":"3ccf902ad4284bdb788152c9f3b494dcb62280545de0238b3cfb027487ab855d"} Jan 03 
06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.061740 4854 generic.go:334] "Generic (PLEG): container finished" podID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerID="31fc5e08fb311d65ebebc4011426e8acfb33d8bab9f8b4fd5ab98acf724b42a9" exitCode=0 Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.061768 4854 generic.go:334] "Generic (PLEG): container finished" podID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerID="e35fd78b689fe3742d239829ec7cdfe025369e2c75ba6ecfff04721d9d345281" exitCode=2 Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.061777 4854 generic.go:334] "Generic (PLEG): container finished" podID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerID="db58d72d35553c6937243dceacb27de2ba2b335c91c86fa72db32f876385b442" exitCode=0 Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.061798 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerDied","Data":"31fc5e08fb311d65ebebc4011426e8acfb33d8bab9f8b4fd5ab98acf724b42a9"} Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.061827 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerDied","Data":"e35fd78b689fe3742d239829ec7cdfe025369e2c75ba6ecfff04721d9d345281"} Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.061837 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerDied","Data":"db58d72d35553c6937243dceacb27de2ba2b335c91c86fa72db32f876385b442"} Jan 03 06:08:24 crc kubenswrapper[4854]: I0103 06:08:24.138421 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2518f81-3d3d-47a6-a157-19c2685f07d2" path="/var/lib/kubelet/pods/b2518f81-3d3d-47a6-a157-19c2685f07d2/volumes" Jan 03 06:08:24 crc kubenswrapper[4854]: E0103 06:08:24.402464 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache]" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.083625 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"5325c3f3-d386-41ed-aa06-b0adfc7ce2b9","Type":"ContainerStarted","Data":"a81d6a7184d9ecbbb93e955ead6d487eec3c56824c582758209af59ab07001e9"} Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.117292 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.757395926 podStartE2EDuration="3.117273838s" podCreationTimestamp="2026-01-03 06:08:22 +0000 UTC" firstStartedPulling="2026-01-03 06:08:23.261825437 +0000 UTC m=+1681.588402009" lastFinishedPulling="2026-01-03 06:08:23.621703349 +0000 UTC m=+1681.948279921" observedRunningTime="2026-01-03 06:08:24.098880766 +0000 UTC m=+1682.425457338" watchObservedRunningTime="2026-01-03 06:08:25.117273838 +0000 UTC 
m=+1683.443850420" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.119754 4854 generic.go:334] "Generic (PLEG): container finished" podID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerID="4cf10882e15fed543760e137cf2cd41e094d7b423415676d1b6ba961764d7ab7" exitCode=0 Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.119985 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerDied","Data":"4cf10882e15fed543760e137cf2cd41e094d7b423415676d1b6ba961764d7ab7"} Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.132142 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.252440466 podStartE2EDuration="3.132116428s" podCreationTimestamp="2026-01-03 06:08:22 +0000 UTC" firstStartedPulling="2026-01-03 06:08:23.093455319 +0000 UTC m=+1681.420031901" lastFinishedPulling="2026-01-03 06:08:23.973131291 +0000 UTC m=+1682.299707863" observedRunningTime="2026-01-03 06:08:25.116808376 +0000 UTC m=+1683.443384948" watchObservedRunningTime="2026-01-03 06:08:25.132116428 +0000 UTC m=+1683.458693000" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.503114 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.582068 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-log-httpd\") pod \"0b283d54-42a4-48fb-a5a7-952b74db16b8\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.582244 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-scripts\") pod \"0b283d54-42a4-48fb-a5a7-952b74db16b8\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.582325 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-config-data\") pod \"0b283d54-42a4-48fb-a5a7-952b74db16b8\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.582536 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-run-httpd\") pod \"0b283d54-42a4-48fb-a5a7-952b74db16b8\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.582601 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-combined-ca-bundle\") pod \"0b283d54-42a4-48fb-a5a7-952b74db16b8\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.582647 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx5c4\" (UniqueName: \"kubernetes.io/projected/0b283d54-42a4-48fb-a5a7-952b74db16b8-kube-api-access-hx5c4\") pod \"0b283d54-42a4-48fb-a5a7-952b74db16b8\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.582725 4854 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-sg-core-conf-yaml\") pod \"0b283d54-42a4-48fb-a5a7-952b74db16b8\" (UID: \"0b283d54-42a4-48fb-a5a7-952b74db16b8\") " Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.582820 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b283d54-42a4-48fb-a5a7-952b74db16b8" (UID: "0b283d54-42a4-48fb-a5a7-952b74db16b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.583473 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.583887 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b283d54-42a4-48fb-a5a7-952b74db16b8" (UID: "0b283d54-42a4-48fb-a5a7-952b74db16b8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.588994 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-scripts" (OuterVolumeSpecName: "scripts") pod "0b283d54-42a4-48fb-a5a7-952b74db16b8" (UID: "0b283d54-42a4-48fb-a5a7-952b74db16b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.608698 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b283d54-42a4-48fb-a5a7-952b74db16b8-kube-api-access-hx5c4" (OuterVolumeSpecName: "kube-api-access-hx5c4") pod "0b283d54-42a4-48fb-a5a7-952b74db16b8" (UID: "0b283d54-42a4-48fb-a5a7-952b74db16b8"). InnerVolumeSpecName "kube-api-access-hx5c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.648552 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b283d54-42a4-48fb-a5a7-952b74db16b8" (UID: "0b283d54-42a4-48fb-a5a7-952b74db16b8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.688821 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b283d54-42a4-48fb-a5a7-952b74db16b8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.688859 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.688873 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx5c4\" (UniqueName: \"kubernetes.io/projected/0b283d54-42a4-48fb-a5a7-952b74db16b8-kube-api-access-hx5c4\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.688888 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.776919 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-config-data" (OuterVolumeSpecName: "config-data") pod "0b283d54-42a4-48fb-a5a7-952b74db16b8" (UID: "0b283d54-42a4-48fb-a5a7-952b74db16b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.776952 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b283d54-42a4-48fb-a5a7-952b74db16b8" (UID: "0b283d54-42a4-48fb-a5a7-952b74db16b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.792887 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:25 crc kubenswrapper[4854]: I0103 06:08:25.792929 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b283d54-42a4-48fb-a5a7-952b74db16b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.143533 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.143522 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b283d54-42a4-48fb-a5a7-952b74db16b8","Type":"ContainerDied","Data":"36e7aa05273f7d08b2bb38a84f9dbe9519e1a9c617efaafa14f5f6d8ad124f79"} Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.143698 4854 scope.go:117] "RemoveContainer" containerID="31fc5e08fb311d65ebebc4011426e8acfb33d8bab9f8b4fd5ab98acf724b42a9" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.196939 4854 scope.go:117] "RemoveContainer" containerID="e35fd78b689fe3742d239829ec7cdfe025369e2c75ba6ecfff04721d9d345281" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.197228 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.213911 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.233125 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:08:26 crc kubenswrapper[4854]: E0103 06:08:26.233903 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="proxy-httpd" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.233923 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="proxy-httpd" Jan 03 06:08:26 crc kubenswrapper[4854]: E0103 06:08:26.234016 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="ceilometer-notification-agent" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.234022 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="ceilometer-notification-agent" Jan 03 06:08:26 crc kubenswrapper[4854]: E0103 06:08:26.234036 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="sg-core" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.234042 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="sg-core" Jan 03 06:08:26 crc kubenswrapper[4854]: E0103 06:08:26.234069 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="ceilometer-central-agent" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.234089 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="ceilometer-central-agent" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.234478 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="ceilometer-notification-agent" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.234503 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="proxy-httpd" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.234513 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="ceilometer-central-agent" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.234524 4854 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" containerName="sg-core" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.237287 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.241405 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.241632 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.241828 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.244349 4854 scope.go:117] "RemoveContainer" containerID="4cf10882e15fed543760e137cf2cd41e094d7b423415676d1b6ba961764d7ab7" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.251996 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.284014 4854 scope.go:117] "RemoveContainer" containerID="db58d72d35553c6937243dceacb27de2ba2b335c91c86fa72db32f876385b442" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.318940 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.319344 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-scripts\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.319571 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.319764 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-config-data\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.319972 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-run-httpd\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.320187 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.320274 4854 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd9d9\" (UniqueName: \"kubernetes.io/projected/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-kube-api-access-cd9d9\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.320498 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-log-httpd\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.422817 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.423233 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-config-data\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.423762 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-run-httpd\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.423860 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.423974 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd9d9\" (UniqueName: \"kubernetes.io/projected/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-kube-api-access-cd9d9\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.424467 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-log-httpd\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.424530 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-run-httpd\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.424757 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-log-httpd\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 
06:08:26.424858 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.424983 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-scripts\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.427995 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-config-data\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.427969 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.429710 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.430676 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.440449 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-scripts\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.441557 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd9d9\" (UniqueName: \"kubernetes.io/projected/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-kube-api-access-cd9d9\") pod \"ceilometer-0\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: I0103 06:08:26.604499 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:08:26 crc kubenswrapper[4854]: E0103 06:08:26.812830 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache]" Jan 03 06:08:27 crc kubenswrapper[4854]: W0103 06:08:27.124944 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0909f3e_f3d7_4539_82d8_2a1af3c015aa.slice/crio-d91351465618d32b90cdc9b5de676281c6b4f960630512e96f26a11de138f042 WatchSource:0}: Error finding container d91351465618d32b90cdc9b5de676281c6b4f960630512e96f26a11de138f042: Status 404 returned error can't find the container with id d91351465618d32b90cdc9b5de676281c6b4f960630512e96f26a11de138f042 Jan 03 06:08:27 crc kubenswrapper[4854]: I0103 06:08:27.138397 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:08:27 crc kubenswrapper[4854]: I0103 06:08:27.156410 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerStarted","Data":"d91351465618d32b90cdc9b5de676281c6b4f960630512e96f26a11de138f042"} Jan 03 06:08:28 crc kubenswrapper[4854]: I0103 06:08:28.152027 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b283d54-42a4-48fb-a5a7-952b74db16b8" path="/var/lib/kubelet/pods/0b283d54-42a4-48fb-a5a7-952b74db16b8/volumes" Jan 03 06:08:28 crc kubenswrapper[4854]: I0103 06:08:28.291960 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerStarted","Data":"f1833f5de57547fabe165b85587da17e832a06ca017f7f36d0429ef14d552a1b"} Jan 03 06:08:29 crc kubenswrapper[4854]: I0103 06:08:29.333145 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerStarted","Data":"f606d764a110f6198b5d3de30409756daae46bec06b2bee8fcbb7ef90ea5e19f"} Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.053741 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-9rnh5"] Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.064289 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-9rnh5"] Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.131230 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f46296d-5d5c-4aa8-94e1-e8e5951da088" path="/var/lib/kubelet/pods/8f46296d-5d5c-4aa8-94e1-e8e5951da088/volumes" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.135638 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-2sprl"] Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.137923 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.146647 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2sprl"] Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.247917 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-config-data\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.248195 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-combined-ca-bundle\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.248246 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9p5\" (UniqueName: \"kubernetes.io/projected/8fa56f84-4a50-4350-b256-5987e5b990bb-kube-api-access-wb9p5\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.347003 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerStarted","Data":"4bb770e2e51976e2c8823a59adef7ec53a945d6c6e9d03eef75da7eba5dd1f0c"} Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.350178 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-config-data\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.350302 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-combined-ca-bundle\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.350333 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb9p5\" (UniqueName: \"kubernetes.io/projected/8fa56f84-4a50-4350-b256-5987e5b990bb-kube-api-access-wb9p5\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.355642 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-combined-ca-bundle\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.361351 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-config-data\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc 
kubenswrapper[4854]: I0103 06:08:30.379833 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb9p5\" (UniqueName: \"kubernetes.io/projected/8fa56f84-4a50-4350-b256-5987e5b990bb-kube-api-access-wb9p5\") pod \"heat-db-sync-2sprl\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:30 crc kubenswrapper[4854]: I0103 06:08:30.470406 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2sprl" Jan 03 06:08:31 crc kubenswrapper[4854]: I0103 06:08:31.023801 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2sprl"] Jan 03 06:08:31 crc kubenswrapper[4854]: W0103 06:08:31.030387 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fa56f84_4a50_4350_b256_5987e5b990bb.slice/crio-29f79fa4bc7c19f402a1ee8d712203dea643af43b569b7ed132c4af8b9d7be8e WatchSource:0}: Error finding container 29f79fa4bc7c19f402a1ee8d712203dea643af43b569b7ed132c4af8b9d7be8e: Status 404 returned error can't find the container with id 29f79fa4bc7c19f402a1ee8d712203dea643af43b569b7ed132c4af8b9d7be8e Jan 03 06:08:31 crc kubenswrapper[4854]: I0103 06:08:31.359481 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2sprl" event={"ID":"8fa56f84-4a50-4350-b256-5987e5b990bb","Type":"ContainerStarted","Data":"29f79fa4bc7c19f402a1ee8d712203dea643af43b569b7ed132c4af8b9d7be8e"} Jan 03 06:08:32 crc kubenswrapper[4854]: I0103 06:08:32.382483 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerStarted","Data":"6695a30f220b17a6b189176b8b5bfae4f3b9348bd25b12c3c4c19f3146613282"} Jan 03 06:08:32 crc kubenswrapper[4854]: I0103 06:08:32.383427 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 06:08:32 crc kubenswrapper[4854]: I0103 06:08:32.415631 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.964577624 podStartE2EDuration="6.415610848s" podCreationTimestamp="2026-01-03 06:08:26 +0000 UTC" firstStartedPulling="2026-01-03 06:08:27.127247022 +0000 UTC m=+1685.453823594" lastFinishedPulling="2026-01-03 06:08:31.578280246 +0000 UTC m=+1689.904856818" observedRunningTime="2026-01-03 06:08:32.413712451 +0000 UTC m=+1690.740289023" watchObservedRunningTime="2026-01-03 06:08:32.415610848 +0000 UTC m=+1690.742187430" Jan 03 06:08:32 crc kubenswrapper[4854]: I0103 06:08:32.906544 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 03 06:08:33 crc kubenswrapper[4854]: I0103 06:08:33.068689 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 03 06:08:33 crc kubenswrapper[4854]: I0103 06:08:33.186198 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 06:08:33 crc kubenswrapper[4854]: I0103 06:08:33.742446 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 06:08:34 crc kubenswrapper[4854]: I0103 06:08:34.413864 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="sg-core" 
containerID="cri-o://4bb770e2e51976e2c8823a59adef7ec53a945d6c6e9d03eef75da7eba5dd1f0c" gracePeriod=30 Jan 03 06:08:34 crc kubenswrapper[4854]: I0103 06:08:34.413937 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="ceilometer-notification-agent" containerID="cri-o://f606d764a110f6198b5d3de30409756daae46bec06b2bee8fcbb7ef90ea5e19f" gracePeriod=30 Jan 03 06:08:34 crc kubenswrapper[4854]: I0103 06:08:34.413885 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="proxy-httpd" containerID="cri-o://6695a30f220b17a6b189176b8b5bfae4f3b9348bd25b12c3c4c19f3146613282" gracePeriod=30 Jan 03 06:08:34 crc kubenswrapper[4854]: I0103 06:08:34.414131 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="ceilometer-central-agent" containerID="cri-o://f1833f5de57547fabe165b85587da17e832a06ca017f7f36d0429ef14d552a1b" gracePeriod=30 Jan 03 06:08:35 crc kubenswrapper[4854]: I0103 06:08:35.435208 4854 generic.go:334] "Generic (PLEG): container finished" podID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerID="6695a30f220b17a6b189176b8b5bfae4f3b9348bd25b12c3c4c19f3146613282" exitCode=0 Jan 03 06:08:35 crc kubenswrapper[4854]: I0103 06:08:35.435576 4854 generic.go:334] "Generic (PLEG): container finished" podID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerID="4bb770e2e51976e2c8823a59adef7ec53a945d6c6e9d03eef75da7eba5dd1f0c" exitCode=2 Jan 03 06:08:35 crc kubenswrapper[4854]: I0103 06:08:35.435590 4854 generic.go:334] "Generic (PLEG): container finished" podID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerID="f606d764a110f6198b5d3de30409756daae46bec06b2bee8fcbb7ef90ea5e19f" exitCode=0 Jan 03 06:08:35 crc kubenswrapper[4854]: I0103 06:08:35.435614 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerDied","Data":"6695a30f220b17a6b189176b8b5bfae4f3b9348bd25b12c3c4c19f3146613282"} Jan 03 06:08:35 crc kubenswrapper[4854]: I0103 06:08:35.435654 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerDied","Data":"4bb770e2e51976e2c8823a59adef7ec53a945d6c6e9d03eef75da7eba5dd1f0c"} Jan 03 06:08:35 crc kubenswrapper[4854]: I0103 06:08:35.435672 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerDied","Data":"f606d764a110f6198b5d3de30409756daae46bec06b2bee8fcbb7ef90ea5e19f"} Jan 03 06:08:37 crc kubenswrapper[4854]: E0103 06:08:37.092073 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory 
cache]" Jan 03 06:08:37 crc kubenswrapper[4854]: I0103 06:08:37.896139 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq" containerID="cri-o://2c57a63b557f809daa470fa1e3f261e47b9ca7c22a62d29b168b4282d62dc1e2" gracePeriod=604796 Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.205941 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="rabbitmq" containerID="cri-o://1692c8acfa3150463e84907272e673ac637c61b8759e684e77f9e6829b387f9e" gracePeriod=604795 Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.470104 4854 generic.go:334] "Generic (PLEG): container finished" podID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerID="f1833f5de57547fabe165b85587da17e832a06ca017f7f36d0429ef14d552a1b" exitCode=0 Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.470119 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerDied","Data":"f1833f5de57547fabe165b85587da17e832a06ca017f7f36d0429ef14d552a1b"} Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.470189 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0909f3e-f3d7-4539-82d8-2a1af3c015aa","Type":"ContainerDied","Data":"d91351465618d32b90cdc9b5de676281c6b4f960630512e96f26a11de138f042"} Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.470214 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91351465618d32b90cdc9b5de676281c6b4f960630512e96f26a11de138f042" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.520582 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.603214 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-run-httpd\") pod \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.603787 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-sg-core-conf-yaml\") pod \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.604011 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-ceilometer-tls-certs\") pod \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.604134 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-scripts\") pod \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.604223 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0909f3e-f3d7-4539-82d8-2a1af3c015aa" (UID: "c0909f3e-f3d7-4539-82d8-2a1af3c015aa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.604315 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-config-data\") pod \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.604673 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd9d9\" (UniqueName: \"kubernetes.io/projected/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-kube-api-access-cd9d9\") pod \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.604736 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-combined-ca-bundle\") pod \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.604772 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-log-httpd\") pod \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\" (UID: \"c0909f3e-f3d7-4539-82d8-2a1af3c015aa\") " Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.606553 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.608195 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0909f3e-f3d7-4539-82d8-2a1af3c015aa" (UID: "c0909f3e-f3d7-4539-82d8-2a1af3c015aa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.615820 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-kube-api-access-cd9d9" (OuterVolumeSpecName: "kube-api-access-cd9d9") pod "c0909f3e-f3d7-4539-82d8-2a1af3c015aa" (UID: "c0909f3e-f3d7-4539-82d8-2a1af3c015aa"). InnerVolumeSpecName "kube-api-access-cd9d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.621763 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-scripts" (OuterVolumeSpecName: "scripts") pod "c0909f3e-f3d7-4539-82d8-2a1af3c015aa" (UID: "c0909f3e-f3d7-4539-82d8-2a1af3c015aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.645722 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0909f3e-f3d7-4539-82d8-2a1af3c015aa" (UID: "c0909f3e-f3d7-4539-82d8-2a1af3c015aa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.697392 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c0909f3e-f3d7-4539-82d8-2a1af3c015aa" (UID: "c0909f3e-f3d7-4539-82d8-2a1af3c015aa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.712448 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd9d9\" (UniqueName: \"kubernetes.io/projected/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-kube-api-access-cd9d9\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.712486 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.712577 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.712597 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.712610 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.714162 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0909f3e-f3d7-4539-82d8-2a1af3c015aa" (UID: "c0909f3e-f3d7-4539-82d8-2a1af3c015aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.751146 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-config-data" (OuterVolumeSpecName: "config-data") pod "c0909f3e-f3d7-4539-82d8-2a1af3c015aa" (UID: "c0909f3e-f3d7-4539-82d8-2a1af3c015aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.815449 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:38 crc kubenswrapper[4854]: I0103 06:08:38.815722 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0909f3e-f3d7-4539-82d8-2a1af3c015aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.060689 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 03 06:08:39 crc kubenswrapper[4854]: E0103 06:08:39.096132 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache]" Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.393109 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.480390 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.534300 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.555715 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.569470 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:08:39 crc kubenswrapper[4854]: E0103 06:08:39.570139 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="ceilometer-notification-agent"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.570162 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="ceilometer-notification-agent"
Jan 03 06:08:39 crc kubenswrapper[4854]: E0103 06:08:39.570183 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="sg-core"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.570191 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="sg-core"
Jan 03 06:08:39 crc kubenswrapper[4854]: E0103 06:08:39.570223 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="ceilometer-central-agent"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.570233 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="ceilometer-central-agent"
Jan 03 06:08:39 crc kubenswrapper[4854]: E0103 06:08:39.570274 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="proxy-httpd"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.570314 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="proxy-httpd"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.570622 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="proxy-httpd"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.570659 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="sg-core"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.570678 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="ceilometer-central-agent"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.570699 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" containerName="ceilometer-notification-agent"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.573050 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.578454 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.578644 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.579260 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.583875 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.635983 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-run-httpd\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.636034 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.636098 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.636119 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5bsp\" (UniqueName: \"kubernetes.io/projected/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-kube-api-access-f5bsp\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.636150 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-log-httpd\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.636184 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.636247 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-config-data\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.636305 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-scripts\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.738007 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-config-data\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.738140 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-scripts\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.738231 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-run-httpd\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.738257 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.738292 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.738311 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5bsp\" (UniqueName: \"kubernetes.io/projected/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-kube-api-access-f5bsp\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.738343 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-log-httpd\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.738372 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.740106 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-log-httpd\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.740130 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-run-httpd\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.743305 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.743942 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-config-data\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.745887 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.747026 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-scripts\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.747278 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.763112 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5bsp\" (UniqueName: \"kubernetes.io/projected/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-kube-api-access-f5bsp\") pod \"ceilometer-0\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " pod="openstack/ceilometer-0"
Jan 03 06:08:39 crc kubenswrapper[4854]: I0103 06:08:39.893071 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 06:08:40 crc kubenswrapper[4854]: I0103 06:08:40.133948 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0909f3e-f3d7-4539-82d8-2a1af3c015aa" path="/var/lib/kubelet/pods/c0909f3e-f3d7-4539-82d8-2a1af3c015aa/volumes"
Jan 03 06:08:41 crc kubenswrapper[4854]: I0103 06:08:41.755850 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 06:08:41 crc kubenswrapper[4854]: I0103 06:08:41.756383 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 06:08:41 crc kubenswrapper[4854]: I0103 06:08:41.756441 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx"
Jan 03 06:08:41 crc kubenswrapper[4854]: I0103 06:08:41.757516 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 03 06:08:41 crc kubenswrapper[4854]: I0103 06:08:41.757582 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" gracePeriod=600
Jan 03 06:08:42 crc kubenswrapper[4854]: I0103 06:08:42.521844 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" exitCode=0
Jan 03 06:08:42 crc kubenswrapper[4854]: I0103 06:08:42.521883 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"}
Jan 03 06:08:42 crc kubenswrapper[4854]: I0103 06:08:42.521944 4854 scope.go:117] "RemoveContainer" containerID="9b4c3aaa2ac11419adcfab21b6c1450ea5c292a92e0be09a3fba503318e11474"
Jan 03 06:08:43 crc kubenswrapper[4854]: E0103 06:08:43.299803 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:08:43 crc kubenswrapper[4854]: I0103 06:08:43.536263 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:08:43 crc kubenswrapper[4854]: E0103 06:08:43.536699 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:08:44 crc kubenswrapper[4854]: I0103 06:08:44.553327 4854 generic.go:334] "Generic (PLEG): container finished" podID="71288814-2f4e-4e92-8064-8f9ef1920212" containerID="2c57a63b557f809daa470fa1e3f261e47b9ca7c22a62d29b168b4282d62dc1e2" exitCode=0
Jan 03 06:08:44 crc kubenswrapper[4854]: I0103 06:08:44.553667 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71288814-2f4e-4e92-8064-8f9ef1920212","Type":"ContainerDied","Data":"2c57a63b557f809daa470fa1e3f261e47b9ca7c22a62d29b168b4282d62dc1e2"}
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.870072 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-q6szn"]
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.882147 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.892698 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.939626 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.939808 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-config\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.939874 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.939916 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.939995 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.940062 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.940307 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clb4x\" (UniqueName: \"kubernetes.io/projected/056cdc21-0e18-423c-8fac-6ace074c15d3-kube-api-access-clb4x\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:46 crc kubenswrapper[4854]: I0103 06:08:46.959725 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-q6szn"]
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.043893 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clb4x\" (UniqueName: \"kubernetes.io/projected/056cdc21-0e18-423c-8fac-6ace074c15d3-kube-api-access-clb4x\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.044032 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.044154 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-config\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.044191 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.044216 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.044270 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.044322 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.045059 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.045662 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.046063 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.046584 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.046606 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.047105 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-config\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.090226 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clb4x\" (UniqueName: \"kubernetes.io/projected/056cdc21-0e18-423c-8fac-6ace074c15d3-kube-api-access-clb4x\") pod \"dnsmasq-dns-7d84b4d45c-q6szn\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.219368 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:47 crc kubenswrapper[4854]: E0103 06:08:47.431841 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache]"
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.604645 4854 generic.go:334] "Generic (PLEG): container finished" podID="11d4187f-5938-4054-9eec-4d84f843bd73" containerID="1692c8acfa3150463e84907272e673ac637c61b8759e684e77f9e6829b387f9e" exitCode=0
Jan 03 06:08:47 crc kubenswrapper[4854]: I0103 06:08:47.604733 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"11d4187f-5938-4054-9eec-4d84f843bd73","Type":"ContainerDied","Data":"1692c8acfa3150463e84907272e673ac637c61b8759e684e77f9e6829b387f9e"}
Jan 03 06:08:48 crc kubenswrapper[4854]: E0103 06:08:48.104929 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache]"
Jan 03 06:08:48 crc kubenswrapper[4854]: E0103 06:08:48.105967 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache]"
Jan 03 06:08:49 crc kubenswrapper[4854]: I0103 06:08:49.060419 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.544258 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688223 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-plugins-conf\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688303 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-erlang-cookie\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688402 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-server-conf\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688453 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-tls\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688557 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-plugins\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688583 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-confd\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688616 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7fn\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-kube-api-access-wn7fn\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688698 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-config-data\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.688761 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71288814-2f4e-4e92-8064-8f9ef1920212-erlang-cookie-secret\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.689296 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.689347 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71288814-2f4e-4e92-8064-8f9ef1920212-pod-info\") pod \"71288814-2f4e-4e92-8064-8f9ef1920212\" (UID: \"71288814-2f4e-4e92-8064-8f9ef1920212\") "
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.699050 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.699626 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.711423 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.713382 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/71288814-2f4e-4e92-8064-8f9ef1920212-pod-info" (OuterVolumeSpecName: "pod-info") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.721687 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-kube-api-access-wn7fn" (OuterVolumeSpecName: "kube-api-access-wn7fn") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "kube-api-access-wn7fn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.729835 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.751422 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71288814-2f4e-4e92-8064-8f9ef1920212","Type":"ContainerDied","Data":"f5b0295f2709e3a9e8f2abe44fb70699ba1907f3f15e6f0b9cdf8dceffbd0927"}
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.751481 4854 scope.go:117] "RemoveContainer" containerID="2c57a63b557f809daa470fa1e3f261e47b9ca7c22a62d29b168b4282d62dc1e2"
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.751648 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.755225 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71288814-2f4e-4e92-8064-8f9ef1920212-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.776478 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-config-data" (OuterVolumeSpecName: "config-data") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.791954 4854 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71288814-2f4e-4e92-8064-8f9ef1920212-pod-info\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.791986 4854 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.791997 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.792006 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.792014 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.792023 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7fn\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-kube-api-access-wn7fn\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.792031 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.792040 4854 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71288814-2f4e-4e92-8064-8f9ef1920212-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:50 crc kubenswrapper[4854]: I0103 06:08:50.986653 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11" (OuterVolumeSpecName: "persistence") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.006388 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") on node \"crc\" "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.009905 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-server-conf" (OuterVolumeSpecName: "server-conf") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.072065 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.072587 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11") on node "crc"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.087109 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "71288814-2f4e-4e92-8064-8f9ef1920212" (UID: "71288814-2f4e-4e92-8064-8f9ef1920212"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.110728 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.110776 4854 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71288814-2f4e-4e92-8064-8f9ef1920212-server-conf\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.110790 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71288814-2f4e-4e92-8064-8f9ef1920212-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.264418 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.468142 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.483435 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.500191 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 03 06:08:51 crc kubenswrapper[4854]: E0103 06:08:51.500714 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.500732 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq"
Jan 03 06:08:51 crc kubenswrapper[4854]: E0103 06:08:51.500773 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="setup-container"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.500782 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="setup-container"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.501043 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.502470 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.504506 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.507059 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.507697 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2l8qk"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.507923 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.508226 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.508417 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.508456 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.519732 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623630 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623703 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623747 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623767 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623796 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2xg\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-kube-api-access-xk2xg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623820 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623877 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623938 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.623981 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.624012 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.624027 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: W0103 06:08:51.699944 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca1d3e35_8df0_4b19_891d_3f2aecc401ab.slice/crio-a52aafe9c232646899bfb0ad5cd7041209f7ed8e30744fc0d3075b071e5b1d39 WatchSource:0}: Error finding container a52aafe9c232646899bfb0ad5cd7041209f7ed8e30744fc0d3075b071e5b1d39: Status 404 returned error can't find the container with id a52aafe9c232646899bfb0ad5cd7041209f7ed8e30744fc0d3075b071e5b1d39
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.727476 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.727578 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.727618 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.727698 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk2xg\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-kube-api-access-xk2xg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.727751 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.727930 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.728032 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.728418 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.728544 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.728583 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.728611 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.728646 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.728718 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.729288 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.731006 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.731368 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.732871 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.732901 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eea12b79154e29fc712e0c8a941340ba71a472a92b568d3e8a62025798d2edd7/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.734056 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.735020 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.735274 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.738314 4854 scope.go:117] "RemoveContainer" containerID="5d753224579da962547240b2ab8650f6accf03025a8696720fb08f50815571ce"
Jan 03 06:08:51 crc kubenswrapper[4854]: E0103 06:08:51.741733 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 03 06:08:51 crc kubenswrapper[4854]: E0103 06:08:51.741784 4854 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 03 06:08:51 crc kubenswrapper[4854]: E0103 06:08:51.741969 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wb9p5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-2sprl_openstack(8fa56f84-4a50-4350-b256-5987e5b990bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.742137 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: E0103 06:08:51.751213 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-2sprl" podUID="8fa56f84-4a50-4350-b256-5987e5b990bb"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.755265 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk2xg\" (UniqueName: \"kubernetes.io/projected/7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1-kube-api-access-xk2xg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.771114 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"11d4187f-5938-4054-9eec-4d84f843bd73","Type":"ContainerDied","Data":"27c7b927832c03c7ba640994748c7296335ccf34d5985ffe88f86de2f25e7391"}
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.771165 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27c7b927832c03c7ba640994748c7296335ccf34d5985ffe88f86de2f25e7391"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.772552 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerStarted","Data":"a52aafe9c232646899bfb0ad5cd7041209f7ed8e30744fc0d3075b071e5b1d39"}
Jan 03 06:08:51 crc kubenswrapper[4854]: E0103 06:08:51.778830 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-2sprl" podUID="8fa56f84-4a50-4350-b256-5987e5b990bb"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.790930 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b672f2-f1c0-42e6-a01e-e7d83d6d9b11\") pod \"rabbitmq-cell1-server-0\" (UID: \"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.822997 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.839986 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.936040 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-plugins\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.936198 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/11d4187f-5938-4054-9eec-4d84f843bd73-erlang-cookie-secret\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.936287 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-plugins-conf\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.936312 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-confd\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.936374 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-erlang-cookie\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.936427 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdxk9\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-kube-api-access-mdxk9\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.936559 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-config-data\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.936656 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/11d4187f-5938-4054-9eec-4d84f843bd73-pod-info\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.937708 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.937776 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-tls\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.937864 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-server-conf\") pod \"11d4187f-5938-4054-9eec-4d84f843bd73\" (UID: \"11d4187f-5938-4054-9eec-4d84f843bd73\") "
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.942464 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.943397 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.949833 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "rabbitmq-plugins".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.969157 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/11d4187f-5938-4054-9eec-4d84f843bd73-pod-info" (OuterVolumeSpecName: "pod-info") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.970437 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-kube-api-access-mdxk9" (OuterVolumeSpecName: "kube-api-access-mdxk9") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "kube-api-access-mdxk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.972462 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d4187f-5938-4054-9eec-4d84f843bd73-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.975417 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:08:51 crc kubenswrapper[4854]: I0103 06:08:51.980440 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae" (OuterVolumeSpecName: "persistence") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "pvc-ee918d9a-3b30-49a0-833a-596dc7301cae". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.008011 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-config-data" (OuterVolumeSpecName: "config-data") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.015730 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-server-conf" (OuterVolumeSpecName: "server-conf") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.042862 4854 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-server-conf\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.042904 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.044037 4854 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/11d4187f-5938-4054-9eec-4d84f843bd73-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.044059 4854 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.044069 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.044097 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdxk9\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-kube-api-access-mdxk9\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.044109 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11d4187f-5938-4054-9eec-4d84f843bd73-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.044117 4854 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/11d4187f-5938-4054-9eec-4d84f843bd73-pod-info\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.044145 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") on node \"crc\" " Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.044172 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.081326 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.081520 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ee918d9a-3b30-49a0-833a-596dc7301cae" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae") on node "crc"
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.146667 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.182350 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" path="/var/lib/kubelet/pods/71288814-2f4e-4e92-8064-8f9ef1920212/volumes"
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.304455 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "11d4187f-5938-4054-9eec-4d84f843bd73" (UID: "11d4187f-5938-4054-9eec-4d84f843bd73"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.362374 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/11d4187f-5938-4054-9eec-4d84f843bd73-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.381897 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-q6szn"]
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.594852 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.813450 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1","Type":"ContainerStarted","Data":"5fec98907e11600e565dd788a25f877be111db53db9e1028c3fbd5f7893aadef"}
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.815587 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.820422 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" event={"ID":"056cdc21-0e18-423c-8fac-6ace074c15d3","Type":"ContainerStarted","Data":"43942b1cef7e735e03152bdb89849cd475d64bf761270795884b869cb3dec685"}
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.931015 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.947788 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.965621 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 03 06:08:52 crc kubenswrapper[4854]: E0103 06:08:52.966332 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="setup-container"
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.966355 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="setup-container"
Jan 03 06:08:52 crc kubenswrapper[4854]: E0103 06:08:52.966407 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="rabbitmq"
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.966416 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="rabbitmq"
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.966643 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" containerName="rabbitmq"
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.967998 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 03 06:08:52 crc kubenswrapper[4854]: I0103 06:08:52.980490 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.079968 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080058 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5584ebd-a44a-4fa9-97cb-df215860d542-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080180 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080204 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2m8s\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-kube-api-access-r2m8s\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080223 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-config-data\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080274 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080290 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080319 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080342 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080366 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.080387 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5584ebd-a44a-4fa9-97cb-df215860d542-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182172 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182287 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5584ebd-a44a-4fa9-97cb-df215860d542-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182374 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182398 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2m8s\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-kube-api-access-r2m8s\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182415 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-config-data\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182476 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182496 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182531 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182560 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182578 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.182597 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5584ebd-a44a-4fa9-97cb-df215860d542-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.185964 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-config-data\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.185987 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.186243 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.186596 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.186960 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.186990 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/211f3cc9bde56467c1ebdea293e17dd39cf39688048069027414643ee5da736e/globalmount\"" pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.190277 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5584ebd-a44a-4fa9-97cb-df215860d542-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.190481 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.191639 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5584ebd-a44a-4fa9-97cb-df215860d542-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.192510 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5584ebd-a44a-4fa9-97cb-df215860d542-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.193406 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.205983 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2m8s\" (UniqueName: \"kubernetes.io/projected/b5584ebd-a44a-4fa9-97cb-df215860d542-kube-api-access-r2m8s\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.241697 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee918d9a-3b30-49a0-833a-596dc7301cae\") pod \"rabbitmq-server-2\" (UID: \"b5584ebd-a44a-4fa9-97cb-df215860d542\") " pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.285170 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.837150 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.851584 4854 generic.go:334] "Generic (PLEG): container finished" podID="056cdc21-0e18-423c-8fac-6ace074c15d3" containerID="4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98" exitCode=0
Jan 03 06:08:53 crc kubenswrapper[4854]: I0103 06:08:53.851646 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" event={"ID":"056cdc21-0e18-423c-8fac-6ace074c15d3","Type":"ContainerDied","Data":"4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98"}
Jan 03 06:08:54 crc kubenswrapper[4854]: I0103 06:08:54.132397 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d4187f-5938-4054-9eec-4d84f843bd73" path="/var/lib/kubelet/pods/11d4187f-5938-4054-9eec-4d84f843bd73/volumes"
Jan 03 06:08:54 crc kubenswrapper[4854]: E0103 06:08:54.346447 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache]"
Jan 03 06:08:54 crc kubenswrapper[4854]: I0103 06:08:54.393130 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="71288814-2f4e-4e92-8064-8f9ef1920212" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout"
Jan 03 06:08:54 crc kubenswrapper[4854]: I0103 06:08:54.868162 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1","Type":"ContainerStarted","Data":"d391b7f999650ff1558f289c6ea306c961b37bc3e55d68991e760c7fffb4d5fb"}
Jan 03 06:08:55 crc kubenswrapper[4854]: I0103 06:08:55.119540 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:08:55 crc kubenswrapper[4854]: E0103 06:08:55.120610 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:08:55 crc kubenswrapper[4854]: W0103 06:08:55.608949 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5584ebd_a44a_4fa9_97cb_df215860d542.slice/crio-15464fa7653244886af4e8444a6efbc6c330cd8ad7b6a3e113aa3a9d5f9001e2 WatchSource:0}: Error finding container 15464fa7653244886af4e8444a6efbc6c330cd8ad7b6a3e113aa3a9d5f9001e2: Status 404 returned error can't find the container with id 15464fa7653244886af4e8444a6efbc6c330cd8ad7b6a3e113aa3a9d5f9001e2
Jan 03 06:08:55 crc kubenswrapper[4854]: I0103 06:08:55.883525 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" event={"ID":"056cdc21-0e18-423c-8fac-6ace074c15d3","Type":"ContainerStarted","Data":"760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae"}
Jan 03 06:08:55 crc kubenswrapper[4854]: I0103 06:08:55.883800 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:08:55 crc kubenswrapper[4854]: I0103 06:08:55.886577 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b5584ebd-a44a-4fa9-97cb-df215860d542","Type":"ContainerStarted","Data":"15464fa7653244886af4e8444a6efbc6c330cd8ad7b6a3e113aa3a9d5f9001e2"}
Jan 03 06:08:55 crc kubenswrapper[4854]: I0103 06:08:55.907295 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" podStartSLOduration=9.907272635 podStartE2EDuration="9.907272635s" podCreationTimestamp="2026-01-03 06:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:08:55.897878041 +0000 UTC m=+1714.224454613" watchObservedRunningTime="2026-01-03 06:08:55.907272635 +0000 UTC m=+1714.233849207"
Jan 03 06:08:56 crc kubenswrapper[4854]: I0103 06:08:56.900174 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerStarted","Data":"7def2b0ff7c9c47e2ab148307011319889a3bf934eaf31dbecc60b60a9497a0e"}
Jan 03 06:08:57 crc kubenswrapper[4854]: E0103 06:08:57.484813 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache]"
Jan 03 06:08:58 crc kubenswrapper[4854]: I0103 06:08:58.479238 4854 scope.go:117] "RemoveContainer" containerID="966be0870b3703258629a86592d60328dd3141e73e827e626ef8c5cda9a46c3a"
Jan 03 06:08:58 crc kubenswrapper[4854]: I0103 06:08:58.869265 4854 scope.go:117] "RemoveContainer" containerID="b860c1238393526732cdb6b943711bfb2efdd77463816c7accde43211155cbcb"
Jan 03 06:08:58 crc kubenswrapper[4854]: I0103 06:08:58.905782 4854 scope.go:117] "RemoveContainer" containerID="8b3905b680a06f0704057089d6a59eff8699f94da007aad0b4bf2bf6b922a256"
Jan 03 06:08:58 crc kubenswrapper[4854]: I0103 06:08:58.946287 4854 scope.go:117] "RemoveContainer" containerID="eb1c1ac7cb3199760c4745b40d40c4521c1ba8316ddd5c2440ae23e398cda87c"
Jan 03 06:08:59 crc kubenswrapper[4854]: I0103 06:08:59.149547 4854 scope.go:117] "RemoveContainer" containerID="4b8f08606528c73dd507f22ae3ae9bca8bbbafe4a53b066c34f385c2f9d2c78f"
Jan 03 06:09:00 crc kubenswrapper[4854]: I0103 06:09:00.057233 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b5584ebd-a44a-4fa9-97cb-df215860d542","Type":"ContainerStarted","Data":"1f8ddf18daa3d95200c04a97c9dc467706ad4aa97502d14cd29b03f46c07f467"}
Jan 03 06:09:00 crc kubenswrapper[4854]: I0103 06:09:00.072971 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerStarted","Data":"aae61ed9f182e8e29eef552d76390e004d87bb0bd9362e48602c571c0c1ce11e"}
Jan 03 06:09:01 crc kubenswrapper[4854]: I0103 06:09:01.085275 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerStarted","Data":"1a772472bda0bd83a0686fb5c46dd06624d554cb7005669d6131cd275f4d654d"}
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.104441 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerStarted","Data":"2c0685c51559d572f3197dd8105f8a3a7d53eccdda8a3d791678eee0c10780ad"}
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.105042 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.137833 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=13.14325277 podStartE2EDuration="23.137816126s" podCreationTimestamp="2026-01-03 06:08:39 +0000 UTC" firstStartedPulling="2026-01-03 06:08:51.704103104 +0000 UTC m=+1710.030679676" lastFinishedPulling="2026-01-03 06:09:01.69866646 +0000 UTC m=+1720.025243032" observedRunningTime="2026-01-03 06:09:02.129905429 +0000 UTC m=+1720.456482031" watchObservedRunningTime="2026-01-03 06:09:02.137816126 +0000 UTC m=+1720.464392698"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.220954 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.335380 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2"]
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.335612 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" podUID="d521510c-fc2f-4928-a2c8-45155c352562" containerName="dnsmasq-dns" containerID="cri-o://5c22bce98bc4a32ece7dd30b876deb7e801b8501ffe66759f8f8501daa90c0d3" gracePeriod=10
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.599130 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-b824x"]
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.601768 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.626951 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-b824x"]
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.807025 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.807079 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.807126 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxl4v\" (UniqueName: \"kubernetes.io/projected/61f72f24-1aea-49d0-b209-bf4556b49da8-kube-api-access-wxl4v\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.807257 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-config\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.807290 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.807349 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.807527 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.908888 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.908982 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.909006 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.909031 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxl4v\" (UniqueName: \"kubernetes.io/projected/61f72f24-1aea-49d0-b209-bf4556b49da8-kube-api-access-wxl4v\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.909117 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-config\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.909142 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.909192 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.910169 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.910672 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.911206 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.911746 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.912743 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-config\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.912939 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61f72f24-1aea-49d0-b209-bf4556b49da8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:02 crc kubenswrapper[4854]: I0103 06:09:02.932934 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxl4v\" (UniqueName: \"kubernetes.io/projected/61f72f24-1aea-49d0-b209-bf4556b49da8-kube-api-access-wxl4v\") pod \"dnsmasq-dns-6f6df4f56c-b824x\" (UID: \"61f72f24-1aea-49d0-b209-bf4556b49da8\") " pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.128690 4854 generic.go:334] "Generic (PLEG): container finished" podID="d521510c-fc2f-4928-a2c8-45155c352562" containerID="5c22bce98bc4a32ece7dd30b876deb7e801b8501ffe66759f8f8501daa90c0d3" exitCode=0
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.130454 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" event={"ID":"d521510c-fc2f-4928-a2c8-45155c352562","Type":"ContainerDied","Data":"5c22bce98bc4a32ece7dd30b876deb7e801b8501ffe66759f8f8501daa90c0d3"}
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.130542 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2" event={"ID":"d521510c-fc2f-4928-a2c8-45155c352562","Type":"ContainerDied","Data":"a1069a5bfa9842fb8bcc20ed8ab01d001701bff8b8d9c69c9ab0afd09ca76275"}
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.130553 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1069a5bfa9842fb8bcc20ed8ab01d001701bff8b8d9c69c9ab0afd09ca76275"
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.229502 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.234766 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2"
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.350126 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8qr9\" (UniqueName: \"kubernetes.io/projected/d521510c-fc2f-4928-a2c8-45155c352562-kube-api-access-h8qr9\") pod \"d521510c-fc2f-4928-a2c8-45155c352562\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") "
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.350328 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-svc\") pod \"d521510c-fc2f-4928-a2c8-45155c352562\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") "
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.350490 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-sb\") pod \"d521510c-fc2f-4928-a2c8-45155c352562\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") "
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.350530 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-nb\") pod \"d521510c-fc2f-4928-a2c8-45155c352562\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") "
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.350556 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-swift-storage-0\") pod \"d521510c-fc2f-4928-a2c8-45155c352562\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") "
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.350598 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-config\") pod \"d521510c-fc2f-4928-a2c8-45155c352562\" (UID: \"d521510c-fc2f-4928-a2c8-45155c352562\") "
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.389706 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d521510c-fc2f-4928-a2c8-45155c352562-kube-api-access-h8qr9" (OuterVolumeSpecName: "kube-api-access-h8qr9") pod "d521510c-fc2f-4928-a2c8-45155c352562" (UID: "d521510c-fc2f-4928-a2c8-45155c352562"). InnerVolumeSpecName "kube-api-access-h8qr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.458943 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8qr9\" (UniqueName: \"kubernetes.io/projected/d521510c-fc2f-4928-a2c8-45155c352562-kube-api-access-h8qr9\") on node \"crc\" DevicePath \"\""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.484806 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d521510c-fc2f-4928-a2c8-45155c352562" (UID: "d521510c-fc2f-4928-a2c8-45155c352562"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.542855 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d521510c-fc2f-4928-a2c8-45155c352562" (UID: "d521510c-fc2f-4928-a2c8-45155c352562"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.569910 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.571184 4854 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.654113 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-config" (OuterVolumeSpecName: "config") pod "d521510c-fc2f-4928-a2c8-45155c352562" (UID: "d521510c-fc2f-4928-a2c8-45155c352562"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.674777 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-config\") on node \"crc\" DevicePath \"\""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.682593 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d521510c-fc2f-4928-a2c8-45155c352562" (UID: "d521510c-fc2f-4928-a2c8-45155c352562"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.749898 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d521510c-fc2f-4928-a2c8-45155c352562" (UID: "d521510c-fc2f-4928-a2c8-45155c352562"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.777112 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 03 06:09:03 crc kubenswrapper[4854]: I0103 06:09:03.777135 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d521510c-fc2f-4928-a2c8-45155c352562-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 03 06:09:04 crc kubenswrapper[4854]: I0103 06:09:04.047908 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-b824x"]
Jan 03 06:09:04 crc kubenswrapper[4854]: I0103 06:09:04.144934 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2"
Jan 03 06:09:04 crc kubenswrapper[4854]: I0103 06:09:04.145559 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-b824x" event={"ID":"61f72f24-1aea-49d0-b209-bf4556b49da8","Type":"ContainerStarted","Data":"ee9047c3cc73c2d9e972da9036594896d38f84a4eb873d6e3ca9e78c0caa7ad2"}
Jan 03 06:09:04 crc kubenswrapper[4854]: I0103 06:09:04.343113 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2"]
Jan 03 06:09:04 crc kubenswrapper[4854]: I0103 06:09:04.353374 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9g7b2"]
Jan 03 06:09:05 crc kubenswrapper[4854]: I0103 06:09:05.160197 4854 generic.go:334] "Generic (PLEG): container finished" podID="61f72f24-1aea-49d0-b209-bf4556b49da8" containerID="61a0714e7912b7ec19dc69d266beff1689638cd5b48cfda92403af33502b08ef" exitCode=0
Jan 03 06:09:05 crc kubenswrapper[4854]: I0103 06:09:05.160244 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-b824x" event={"ID":"61f72f24-1aea-49d0-b209-bf4556b49da8","Type":"ContainerDied","Data":"61a0714e7912b7ec19dc69d266beff1689638cd5b48cfda92403af33502b08ef"}
Jan 03 06:09:06 crc kubenswrapper[4854]: I0103 06:09:06.145212 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d521510c-fc2f-4928-a2c8-45155c352562" path="/var/lib/kubelet/pods/d521510c-fc2f-4928-a2c8-45155c352562/volumes"
Jan 03 06:09:06 crc kubenswrapper[4854]: I0103 06:09:06.176744 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-b824x" event={"ID":"61f72f24-1aea-49d0-b209-bf4556b49da8","Type":"ContainerStarted","Data":"d470c6b4cebd86f40369c0dc9e3faadeb1a26db13c64c76656304c1ac015340e"}
Jan 03 06:09:06 crc kubenswrapper[4854]: I0103 06:09:06.177625 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-b824x"
Jan 03 06:09:06 crc kubenswrapper[4854]: I0103 06:09:06.214655 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-b824x" podStartSLOduration=4.214640183 podStartE2EDuration="4.214640183s" podCreationTimestamp="2026-01-03 06:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:09:06.208236793 +0000 UTC m=+1724.534813445" watchObservedRunningTime="2026-01-03 06:09:06.214640183 +0000 UTC m=+1724.541216755"
Jan 03 06:09:07 crc kubenswrapper[4854]: I0103 06:09:07.119185 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:09:07 crc kubenswrapper[4854]: E0103 06:09:07.119846 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:09:07 crc kubenswrapper[4854]: I0103 06:09:07.191829 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2sprl" event={"ID":"8fa56f84-4a50-4350-b256-5987e5b990bb","Type":"ContainerStarted","Data":"99a9bb4cb21e4def9aa5c449ac0640a93bcfd2962989bb1009d492734be07cad"}
Jan 03 06:09:07 crc kubenswrapper[4854]: E0103 06:09:07.625180 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache]"
Jan 03 06:09:09 crc kubenswrapper[4854]: I0103 06:09:09.229711 4854 generic.go:334] "Generic (PLEG): container finished" podID="8fa56f84-4a50-4350-b256-5987e5b990bb" containerID="99a9bb4cb21e4def9aa5c449ac0640a93bcfd2962989bb1009d492734be07cad" exitCode=0
Jan 03 06:09:09 crc kubenswrapper[4854]: I0103 06:09:09.229790 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2sprl" event={"ID":"8fa56f84-4a50-4350-b256-5987e5b990bb","Type":"ContainerDied","Data":"99a9bb4cb21e4def9aa5c449ac0640a93bcfd2962989bb1009d492734be07cad"}
Jan 03 06:09:09 crc kubenswrapper[4854]: E0103 06:09:09.336594 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache]"
Jan 03 06:09:10 crc kubenswrapper[4854]: I0103 06:09:10.896049 4854 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-db-sync-2sprl" Jan 03 06:09:10 crc kubenswrapper[4854]: I0103 06:09:10.978466 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-combined-ca-bundle\") pod \"8fa56f84-4a50-4350-b256-5987e5b990bb\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " Jan 03 06:09:10 crc kubenswrapper[4854]: I0103 06:09:10.978539 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-config-data\") pod \"8fa56f84-4a50-4350-b256-5987e5b990bb\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " Jan 03 06:09:10 crc kubenswrapper[4854]: I0103 06:09:10.979294 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb9p5\" (UniqueName: \"kubernetes.io/projected/8fa56f84-4a50-4350-b256-5987e5b990bb-kube-api-access-wb9p5\") pod \"8fa56f84-4a50-4350-b256-5987e5b990bb\" (UID: \"8fa56f84-4a50-4350-b256-5987e5b990bb\") " Jan 03 06:09:10 crc kubenswrapper[4854]: I0103 06:09:10.991440 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa56f84-4a50-4350-b256-5987e5b990bb-kube-api-access-wb9p5" (OuterVolumeSpecName: "kube-api-access-wb9p5") pod "8fa56f84-4a50-4350-b256-5987e5b990bb" (UID: "8fa56f84-4a50-4350-b256-5987e5b990bb"). InnerVolumeSpecName "kube-api-access-wb9p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:09:11 crc kubenswrapper[4854]: I0103 06:09:11.014763 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fa56f84-4a50-4350-b256-5987e5b990bb" (UID: "8fa56f84-4a50-4350-b256-5987e5b990bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:11 crc kubenswrapper[4854]: I0103 06:09:11.071202 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-config-data" (OuterVolumeSpecName: "config-data") pod "8fa56f84-4a50-4350-b256-5987e5b990bb" (UID: "8fa56f84-4a50-4350-b256-5987e5b990bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:11 crc kubenswrapper[4854]: I0103 06:09:11.082149 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:11 crc kubenswrapper[4854]: I0103 06:09:11.082177 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa56f84-4a50-4350-b256-5987e5b990bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:11 crc kubenswrapper[4854]: I0103 06:09:11.082187 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb9p5\" (UniqueName: \"kubernetes.io/projected/8fa56f84-4a50-4350-b256-5987e5b990bb-kube-api-access-wb9p5\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:11 crc kubenswrapper[4854]: I0103 06:09:11.256024 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2sprl" event={"ID":"8fa56f84-4a50-4350-b256-5987e5b990bb","Type":"ContainerDied","Data":"29f79fa4bc7c19f402a1ee8d712203dea643af43b569b7ed132c4af8b9d7be8e"} Jan 03 06:09:11 crc kubenswrapper[4854]: I0103 06:09:11.256069 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f79fa4bc7c19f402a1ee8d712203dea643af43b569b7ed132c4af8b9d7be8e" Jan 03 06:09:11 crc kubenswrapper[4854]: I0103 06:09:11.256149 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2sprl" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.507126 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-64d84f65b5-cnzjg"] Jan 03 06:09:12 crc kubenswrapper[4854]: E0103 06:09:12.507712 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d521510c-fc2f-4928-a2c8-45155c352562" containerName="init" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.507727 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d521510c-fc2f-4928-a2c8-45155c352562" containerName="init" Jan 03 06:09:12 crc kubenswrapper[4854]: E0103 06:09:12.507750 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d521510c-fc2f-4928-a2c8-45155c352562" containerName="dnsmasq-dns" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.507768 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d521510c-fc2f-4928-a2c8-45155c352562" containerName="dnsmasq-dns" Jan 03 06:09:12 crc kubenswrapper[4854]: E0103 06:09:12.507786 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa56f84-4a50-4350-b256-5987e5b990bb" containerName="heat-db-sync" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.507792 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa56f84-4a50-4350-b256-5987e5b990bb" containerName="heat-db-sync" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.508034 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa56f84-4a50-4350-b256-5987e5b990bb" containerName="heat-db-sync" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.508061 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="d521510c-fc2f-4928-a2c8-45155c352562" containerName="dnsmasq-dns" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.508900 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.597725 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2pm\" (UniqueName: \"kubernetes.io/projected/f0006564-0566-4941-983d-8e5c58889f7f-kube-api-access-wm2pm\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.598005 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-combined-ca-bundle\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.598174 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-config-data\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.598413 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-64d84f65b5-cnzjg"] Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.599886 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-config-data-custom\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.635386 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-678d8c789d-4cfwq"] Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.646970 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.653017 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5b8559b4dd-7xq2s"] Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.655065 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.669703 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-678d8c789d-4cfwq"] Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.686095 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b8559b4dd-7xq2s"] Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709243 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-combined-ca-bundle\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709303 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql9jw\" (UniqueName: \"kubernetes.io/projected/667c29ce-e696-4ad7-97f1-4b43f3eba910-kube-api-access-ql9jw\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709321 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-public-tls-certs\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709388 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpkh6\" (UniqueName: \"kubernetes.io/projected/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-kube-api-access-kpkh6\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709419 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-internal-tls-certs\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709458 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-config-data-custom\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709486 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-combined-ca-bundle\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709504 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-config-data-custom\") pod 
\"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709549 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-public-tls-certs\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709574 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-config-data\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709597 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-config-data\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709655 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm2pm\" (UniqueName: \"kubernetes.io/projected/f0006564-0566-4941-983d-8e5c58889f7f-kube-api-access-wm2pm\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709676 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-combined-ca-bundle\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709706 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-internal-tls-certs\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709734 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-config-data-custom\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.709770 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-config-data\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.718481 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-combined-ca-bundle\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.719037 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-config-data-custom\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.726054 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0006564-0566-4941-983d-8e5c58889f7f-config-data\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.729096 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm2pm\" (UniqueName: \"kubernetes.io/projected/f0006564-0566-4941-983d-8e5c58889f7f-kube-api-access-wm2pm\") pod \"heat-engine-64d84f65b5-cnzjg\" (UID: \"f0006564-0566-4941-983d-8e5c58889f7f\") " pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811277 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-internal-tls-certs\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811333 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-config-data-custom\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811403 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-combined-ca-bundle\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811434 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql9jw\" (UniqueName: \"kubernetes.io/projected/667c29ce-e696-4ad7-97f1-4b43f3eba910-kube-api-access-ql9jw\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811457 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-public-tls-certs\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811522 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpkh6\" (UniqueName: 
\"kubernetes.io/projected/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-kube-api-access-kpkh6\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811545 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-internal-tls-certs\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811598 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-combined-ca-bundle\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811625 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-config-data-custom\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811680 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-public-tls-certs\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811713 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-config-data\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.811741 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-config-data\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.816194 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-config-data\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.816671 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-internal-tls-certs\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.817822 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-combined-ca-bundle\") pod 
\"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.818250 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-config-data-custom\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.819502 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-public-tls-certs\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.823055 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-config-data\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.826351 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-internal-tls-certs\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.827171 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-combined-ca-bundle\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.828931 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-config-data-custom\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.831061 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpkh6\" (UniqueName: \"kubernetes.io/projected/725ed672-0c58-4f2c-b6c2-eb51c516a7a9-kube-api-access-kpkh6\") pod \"heat-api-678d8c789d-4cfwq\" (UID: \"725ed672-0c58-4f2c-b6c2-eb51c516a7a9\") " pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.836696 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql9jw\" (UniqueName: \"kubernetes.io/projected/667c29ce-e696-4ad7-97f1-4b43f3eba910-kube-api-access-ql9jw\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.837583 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/667c29ce-e696-4ad7-97f1-4b43f3eba910-public-tls-certs\") pod \"heat-cfnapi-5b8559b4dd-7xq2s\" (UID: \"667c29ce-e696-4ad7-97f1-4b43f3eba910\") " 
pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.894072 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.987793 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:12 crc kubenswrapper[4854]: I0103 06:09:12.988515 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:13 crc kubenswrapper[4854]: I0103 06:09:13.232262 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-b824x" Jan 03 06:09:13 crc kubenswrapper[4854]: I0103 06:09:13.508975 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-q6szn"] Jan 03 06:09:13 crc kubenswrapper[4854]: I0103 06:09:13.509467 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" podUID="056cdc21-0e18-423c-8fac-6ace074c15d3" containerName="dnsmasq-dns" containerID="cri-o://760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae" gracePeriod=10 Jan 03 06:09:13 crc kubenswrapper[4854]: I0103 06:09:13.546304 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-64d84f65b5-cnzjg"] Jan 03 06:09:13 crc kubenswrapper[4854]: I0103 06:09:13.846783 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-678d8c789d-4cfwq"] Jan 03 06:09:13 crc kubenswrapper[4854]: I0103 06:09:13.880239 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b8559b4dd-7xq2s"] Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.274823 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.400430 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-swift-storage-0\") pod \"056cdc21-0e18-423c-8fac-6ace074c15d3\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.400629 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-config\") pod \"056cdc21-0e18-423c-8fac-6ace074c15d3\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.400817 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-sb\") pod \"056cdc21-0e18-423c-8fac-6ace074c15d3\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.400943 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-openstack-edpm-ipam\") pod \"056cdc21-0e18-423c-8fac-6ace074c15d3\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.401071 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-nb\") pod \"056cdc21-0e18-423c-8fac-6ace074c15d3\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.401140 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clb4x\" (UniqueName: \"kubernetes.io/projected/056cdc21-0e18-423c-8fac-6ace074c15d3-kube-api-access-clb4x\") pod \"056cdc21-0e18-423c-8fac-6ace074c15d3\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.401213 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-svc\") pod \"056cdc21-0e18-423c-8fac-6ace074c15d3\" (UID: \"056cdc21-0e18-423c-8fac-6ace074c15d3\") " Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.469798 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056cdc21-0e18-423c-8fac-6ace074c15d3-kube-api-access-clb4x" (OuterVolumeSpecName: "kube-api-access-clb4x") pod "056cdc21-0e18-423c-8fac-6ace074c15d3" (UID: "056cdc21-0e18-423c-8fac-6ace074c15d3"). InnerVolumeSpecName "kube-api-access-clb4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.509778 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clb4x\" (UniqueName: \"kubernetes.io/projected/056cdc21-0e18-423c-8fac-6ace074c15d3-kube-api-access-clb4x\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.550083 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-config" (OuterVolumeSpecName: "config") pod "056cdc21-0e18-423c-8fac-6ace074c15d3" (UID: "056cdc21-0e18-423c-8fac-6ace074c15d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.555248 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-678d8c789d-4cfwq" event={"ID":"725ed672-0c58-4f2c-b6c2-eb51c516a7a9","Type":"ContainerStarted","Data":"9a7898362cd76c55bae38e92e7233a687752e9337434fbbc5817a4e877d99afd"} Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.559800 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" event={"ID":"667c29ce-e696-4ad7-97f1-4b43f3eba910","Type":"ContainerStarted","Data":"1d9516c289b2f281b406d0ae14749c6c929efc7cc4d143c211ae09bb86c5f386"} Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.571723 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "056cdc21-0e18-423c-8fac-6ace074c15d3" (UID: "056cdc21-0e18-423c-8fac-6ace074c15d3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.572572 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "056cdc21-0e18-423c-8fac-6ace074c15d3" (UID: "056cdc21-0e18-423c-8fac-6ace074c15d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.575050 4854 generic.go:334] "Generic (PLEG): container finished" podID="056cdc21-0e18-423c-8fac-6ace074c15d3" containerID="760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae" exitCode=0 Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.575206 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.575225 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" event={"ID":"056cdc21-0e18-423c-8fac-6ace074c15d3","Type":"ContainerDied","Data":"760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae"} Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.575359 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-q6szn" event={"ID":"056cdc21-0e18-423c-8fac-6ace074c15d3","Type":"ContainerDied","Data":"43942b1cef7e735e03152bdb89849cd475d64bf761270795884b869cb3dec685"} Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.575388 4854 scope.go:117] "RemoveContainer" containerID="760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.579056 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "056cdc21-0e18-423c-8fac-6ace074c15d3" (UID: "056cdc21-0e18-423c-8fac-6ace074c15d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.583167 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64d84f65b5-cnzjg" event={"ID":"f0006564-0566-4941-983d-8e5c58889f7f","Type":"ContainerStarted","Data":"5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae"} Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.583224 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64d84f65b5-cnzjg" event={"ID":"f0006564-0566-4941-983d-8e5c58889f7f","Type":"ContainerStarted","Data":"9a029eff02dca59775d3375ab27e46e3177527bab92a5304866e58f56de5d7ae"} Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.585974 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.600143 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "056cdc21-0e18-423c-8fac-6ace074c15d3" (UID: "056cdc21-0e18-423c-8fac-6ace074c15d3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.605721 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "056cdc21-0e18-423c-8fac-6ace074c15d3" (UID: "056cdc21-0e18-423c-8fac-6ace074c15d3"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.617098 4854 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-config\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.617145 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.617156 4854 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.617166 4854 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.617174 4854 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.617182 4854 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/056cdc21-0e18-423c-8fac-6ace074c15d3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.619241 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-64d84f65b5-cnzjg" podStartSLOduration=2.619212713 podStartE2EDuration="2.619212713s" podCreationTimestamp="2026-01-03 06:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:09:14.608793073 +0000 UTC m=+1732.935369645" watchObservedRunningTime="2026-01-03 06:09:14.619212713 +0000 UTC m=+1732.945789305" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.629634 4854 scope.go:117] "RemoveContainer" containerID="4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.679179 4854 scope.go:117] "RemoveContainer" containerID="760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae" Jan 03 06:09:14 crc kubenswrapper[4854]: E0103 06:09:14.679688 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae\": container with ID starting with 760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae not found: ID does not exist" containerID="760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.679820 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae"} err="failed to get container status \"760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae\": rpc error: code = NotFound desc = could not find container \"760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae\": container with ID starting with 
760b345a6b168777786167d57c3abd52a5c873df18665203e60a65b0dda52cae not found: ID does not exist" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.679852 4854 scope.go:117] "RemoveContainer" containerID="4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98" Jan 03 06:09:14 crc kubenswrapper[4854]: E0103 06:09:14.680318 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98\": container with ID starting with 4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98 not found: ID does not exist" containerID="4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.680346 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98"} err="failed to get container status \"4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98\": rpc error: code = NotFound desc = could not find container \"4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98\": container with ID starting with 4e8e82fac1116a8f68ea184a2475dc6bcaaf6e652cd2013d757feb4f34be7a98 not found: ID does not exist" Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.922203 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-q6szn"] Jan 03 06:09:14 crc kubenswrapper[4854]: I0103 06:09:14.938597 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-q6szn"] Jan 03 06:09:16 crc kubenswrapper[4854]: I0103 06:09:16.152453 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056cdc21-0e18-423c-8fac-6ace074c15d3" path="/var/lib/kubelet/pods/056cdc21-0e18-423c-8fac-6ace074c15d3/volumes" Jan 03 06:09:16 crc kubenswrapper[4854]: I0103 06:09:16.606943 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" event={"ID":"667c29ce-e696-4ad7-97f1-4b43f3eba910","Type":"ContainerStarted","Data":"76a6057c3787df4d701077971b2c5fb11c90b5e40ebfb9e984923dcc0694055d"} Jan 03 06:09:16 crc kubenswrapper[4854]: I0103 06:09:16.607391 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:16 crc kubenswrapper[4854]: I0103 06:09:16.609193 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-678d8c789d-4cfwq" event={"ID":"725ed672-0c58-4f2c-b6c2-eb51c516a7a9","Type":"ContainerStarted","Data":"a52bef7ddf30bdb231a32b899c6c9fcaa20dea1141d76dac1d0b1215d279e397"} Jan 03 06:09:16 crc kubenswrapper[4854]: I0103 06:09:16.650020 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" podStartSLOduration=2.638396311 podStartE2EDuration="4.649988856s" podCreationTimestamp="2026-01-03 06:09:12 +0000 UTC" firstStartedPulling="2026-01-03 06:09:13.866841947 +0000 UTC m=+1732.193418519" lastFinishedPulling="2026-01-03 06:09:15.878434492 +0000 UTC m=+1734.205011064" observedRunningTime="2026-01-03 06:09:16.628753777 +0000 UTC m=+1734.955330349" watchObservedRunningTime="2026-01-03 06:09:16.649988856 +0000 UTC m=+1734.976565418" Jan 03 06:09:16 crc kubenswrapper[4854]: I0103 06:09:16.655604 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-678d8c789d-4cfwq" 
podStartSLOduration=2.6356091619999997 podStartE2EDuration="4.655590526s" podCreationTimestamp="2026-01-03 06:09:12 +0000 UTC" firstStartedPulling="2026-01-03 06:09:13.855020402 +0000 UTC m=+1732.181596974" lastFinishedPulling="2026-01-03 06:09:15.875001766 +0000 UTC m=+1734.201578338" observedRunningTime="2026-01-03 06:09:16.64814955 +0000 UTC m=+1734.974726132" watchObservedRunningTime="2026-01-03 06:09:16.655590526 +0000 UTC m=+1734.982167098" Jan 03 06:09:17 crc kubenswrapper[4854]: I0103 06:09:17.620061 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:18 crc kubenswrapper[4854]: E0103 06:09:18.017062 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba0145_7948_47f0_bec5_7f5fc6cb1150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2518f81_3d3d_47a6_a157_19c2685f07d2.slice/crio-e38e63c3678040ae4e8bdfad604a4579c1e8551a859f7aa50b44b27d06c18126\": RecentStats: unable to find data in memory cache]" Jan 03 06:09:19 crc kubenswrapper[4854]: I0103 06:09:19.118408 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:09:19 crc kubenswrapper[4854]: E0103 06:09:19.119125 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:09:22 crc kubenswrapper[4854]: E0103 06:09:22.084346 4854 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/82c1e085d607a255aff769877ed8efab15f38b25a168aebb4f5e4e6b9e70520e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/82c1e085d607a255aff769877ed8efab15f38b25a168aebb4f5e4e6b9e70520e/diff: no such file or directory, extraDiskErr: Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.397678 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj"] Jan 03 06:09:22 crc kubenswrapper[4854]: E0103 06:09:22.398356 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056cdc21-0e18-423c-8fac-6ace074c15d3" containerName="dnsmasq-dns" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.398374 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="056cdc21-0e18-423c-8fac-6ace074c15d3" containerName="dnsmasq-dns" Jan 03 06:09:22 crc kubenswrapper[4854]: E0103 06:09:22.398435 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056cdc21-0e18-423c-8fac-6ace074c15d3" containerName="init" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.398444 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="056cdc21-0e18-423c-8fac-6ace074c15d3" containerName="init" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.398712 4854 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="056cdc21-0e18-423c-8fac-6ace074c15d3" containerName="dnsmasq-dns" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.399879 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.402913 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.404340 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.404494 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.404736 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.415441 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj"] Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.518579 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.518718 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.518908 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.518941 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkbnx\" (UniqueName: \"kubernetes.io/projected/67938fba-f337-493f-82f0-3076f30fd0fd-kube-api-access-kkbnx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.621494 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.621683 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.621713 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkbnx\" (UniqueName: \"kubernetes.io/projected/67938fba-f337-493f-82f0-3076f30fd0fd-kube-api-access-kkbnx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.622825 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.627024 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.627403 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.641061 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.642815 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkbnx\" (UniqueName: \"kubernetes.io/projected/67938fba-f337-493f-82f0-3076f30fd0fd-kube-api-access-kkbnx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:22 crc kubenswrapper[4854]: I0103 06:09:22.743304 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:23 crc kubenswrapper[4854]: I0103 06:09:23.385815 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj"] Jan 03 06:09:23 crc kubenswrapper[4854]: W0103 06:09:23.390661 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67938fba_f337_493f_82f0_3076f30fd0fd.slice/crio-83745cdeda2bbb32e2441f88212a3fe94ce2e150423d3f83b3c49a6b60455e0d WatchSource:0}: Error finding container 83745cdeda2bbb32e2441f88212a3fe94ce2e150423d3f83b3c49a6b60455e0d: Status 404 returned error can't find the container with id 83745cdeda2bbb32e2441f88212a3fe94ce2e150423d3f83b3c49a6b60455e0d Jan 03 06:09:23 crc kubenswrapper[4854]: I0103 06:09:23.703476 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" event={"ID":"67938fba-f337-493f-82f0-3076f30fd0fd","Type":"ContainerStarted","Data":"83745cdeda2bbb32e2441f88212a3fe94ce2e150423d3f83b3c49a6b60455e0d"} Jan 03 06:09:24 crc kubenswrapper[4854]: I0103 06:09:24.430756 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" Jan 03 06:09:24 crc kubenswrapper[4854]: I0103 06:09:24.516398 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-678d8c789d-4cfwq" Jan 03 06:09:24 crc kubenswrapper[4854]: I0103 06:09:24.548731 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-94fd9f97f-bcw2n"] Jan 03 06:09:24 crc kubenswrapper[4854]: I0103 06:09:24.548992 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" podUID="b49c2220-2581-4c4f-a034-10f34ddc8f80" containerName="heat-cfnapi" containerID="cri-o://1836a121cfa508a6a36da14b8065b7815a3f0a82cd94413993def51986cb2c3d" gracePeriod=60 Jan 03 06:09:24 crc kubenswrapper[4854]: I0103 06:09:24.657797 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-c795c8675-ng42x"] Jan 03 06:09:24 crc kubenswrapper[4854]: I0103 06:09:24.658069 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-c795c8675-ng42x" podUID="a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" containerName="heat-api" containerID="cri-o://547285aff9ff3cd0b00a39340f508011d742c064d6f1bc64f70bcd294cc71028" gracePeriod=60 Jan 03 06:09:27 crc kubenswrapper[4854]: I0103 06:09:27.757689 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" podUID="b49c2220-2581-4c4f-a034-10f34ddc8f80" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.220:8000/healthcheck\": read tcp 10.217.0.2:54372->10.217.0.220:8000: read: connection reset by peer" Jan 03 06:09:27 crc kubenswrapper[4854]: I0103 06:09:27.780972 4854 generic.go:334] "Generic (PLEG): container finished" podID="7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1" containerID="d391b7f999650ff1558f289c6ea306c961b37bc3e55d68991e760c7fffb4d5fb" exitCode=0 Jan 03 06:09:27 crc kubenswrapper[4854]: I0103 06:09:27.781022 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1","Type":"ContainerDied","Data":"d391b7f999650ff1558f289c6ea306c961b37bc3e55d68991e760c7fffb4d5fb"} Jan 03 06:09:27 crc kubenswrapper[4854]: 
Jan 03 06:09:28 crc kubenswrapper[4854]: I0103 06:09:28.795603 4854 generic.go:334] "Generic (PLEG): container finished" podID="a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" containerID="547285aff9ff3cd0b00a39340f508011d742c064d6f1bc64f70bcd294cc71028" exitCode=0
Jan 03 06:09:28 crc kubenswrapper[4854]: I0103 06:09:28.795697 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c795c8675-ng42x" event={"ID":"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698","Type":"ContainerDied","Data":"547285aff9ff3cd0b00a39340f508011d742c064d6f1bc64f70bcd294cc71028"}
Jan 03 06:09:28 crc kubenswrapper[4854]: I0103 06:09:28.798756 4854 generic.go:334] "Generic (PLEG): container finished" podID="b49c2220-2581-4c4f-a034-10f34ddc8f80" containerID="1836a121cfa508a6a36da14b8065b7815a3f0a82cd94413993def51986cb2c3d" exitCode=0
Jan 03 06:09:28 crc kubenswrapper[4854]: I0103 06:09:28.798800 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" event={"ID":"b49c2220-2581-4c4f-a034-10f34ddc8f80","Type":"ContainerDied","Data":"1836a121cfa508a6a36da14b8065b7815a3f0a82cd94413993def51986cb2c3d"}
Jan 03 06:09:31 crc kubenswrapper[4854]: I0103 06:09:31.839662 4854 generic.go:334] "Generic (PLEG): container finished" podID="b5584ebd-a44a-4fa9-97cb-df215860d542" containerID="1f8ddf18daa3d95200c04a97c9dc467706ad4aa97502d14cd29b03f46c07f467" exitCode=0
Jan 03 06:09:31 crc kubenswrapper[4854]: I0103 06:09:31.839743 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b5584ebd-a44a-4fa9-97cb-df215860d542","Type":"ContainerDied","Data":"1f8ddf18daa3d95200c04a97c9dc467706ad4aa97502d14cd29b03f46c07f467"}
Jan 03 06:09:32 crc kubenswrapper[4854]: I0103 06:09:32.132768 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:09:32 crc kubenswrapper[4854]: E0103 06:09:32.133694 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:09:32 crc kubenswrapper[4854]: I0103 06:09:32.491194 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-c795c8675-ng42x" podUID="a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.219:8004/healthcheck\": dial tcp 10.217.0.219:8004: connect: connection refused"
Jan 03 06:09:32 crc kubenswrapper[4854]: I0103 06:09:32.594910 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" podUID="b49c2220-2581-4c4f-a034-10f34ddc8f80" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.220:8000/healthcheck\": dial tcp 10.217.0.220:8000: connect: connection refused"
Jan 03 06:09:32 crc kubenswrapper[4854]: I0103 06:09:32.943982 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-64d84f65b5-cnzjg"
Jan 03 06:09:33 crc kubenswrapper[4854]: I0103 06:09:33.018599 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6749994886-zsx65"]
Jan 03 06:09:33 crc kubenswrapper[4854]: I0103 06:09:33.019111 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6749994886-zsx65" podUID="d346baaf-3040-4209-9049-e92c7b033015" containerName="heat-engine" containerID="cri-o://d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" gracePeriod=60
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.296645 4854 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podd521510c-fc2f-4928-a2c8-45155c352562"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podd521510c-fc2f-4928-a2c8-45155c352562] : Timed out while waiting for systemd to remove kubepods-besteffort-podd521510c_fc2f_4928_a2c8_45155c352562.slice"
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.858632 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n"
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.892857 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n" event={"ID":"b49c2220-2581-4c4f-a034-10f34ddc8f80","Type":"ContainerDied","Data":"d71884cffe7d197e9abf639f17d8e8a438b16a078f4eb76df293628fee061bcf"}
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.892932 4854 scope.go:117] "RemoveContainer" containerID="1836a121cfa508a6a36da14b8065b7815a3f0a82cd94413993def51986cb2c3d"
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.893181 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-94fd9f97f-bcw2n"
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.909067 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data-custom\") pod \"b49c2220-2581-4c4f-a034-10f34ddc8f80\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") "
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.909184 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-internal-tls-certs\") pod \"b49c2220-2581-4c4f-a034-10f34ddc8f80\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") "
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.909216 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c722r\" (UniqueName: \"kubernetes.io/projected/b49c2220-2581-4c4f-a034-10f34ddc8f80-kube-api-access-c722r\") pod \"b49c2220-2581-4c4f-a034-10f34ddc8f80\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") "
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.909342 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-public-tls-certs\") pod \"b49c2220-2581-4c4f-a034-10f34ddc8f80\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") "
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.909492 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-combined-ca-bundle\") pod \"b49c2220-2581-4c4f-a034-10f34ddc8f80\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") "
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.909509 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data\") pod \"b49c2220-2581-4c4f-a034-10f34ddc8f80\" (UID: \"b49c2220-2581-4c4f-a034-10f34ddc8f80\") "
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.939068 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b49c2220-2581-4c4f-a034-10f34ddc8f80" (UID: "b49c2220-2581-4c4f-a034-10f34ddc8f80"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.946396 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49c2220-2581-4c4f-a034-10f34ddc8f80-kube-api-access-c722r" (OuterVolumeSpecName: "kube-api-access-c722r") pod "b49c2220-2581-4c4f-a034-10f34ddc8f80" (UID: "b49c2220-2581-4c4f-a034-10f34ddc8f80"). InnerVolumeSpecName "kube-api-access-c722r". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.948515 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b5584ebd-a44a-4fa9-97cb-df215860d542","Type":"ContainerStarted","Data":"80d8026159353bab74e19b39f65ef42011f68265910beb73e38b8f476d21e743"} Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.948580 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7315bc9d-6a1f-4328-8ef4-4cc84c27b3a1","Type":"ContainerStarted","Data":"c45eec953da039fcb328de18ba2120cd24391069c65128e7e002f14bf97fdb90"} Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.948853 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:09:34 crc kubenswrapper[4854]: I0103 06:09:34.987000 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.986979076 podStartE2EDuration="43.986979076s" podCreationTimestamp="2026-01-03 06:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:09:34.969480831 +0000 UTC m=+1753.296057393" watchObservedRunningTime="2026-01-03 06:09:34.986979076 +0000 UTC m=+1753.313555648" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.012251 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.012283 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c722r\" (UniqueName: \"kubernetes.io/projected/b49c2220-2581-4c4f-a034-10f34ddc8f80-kube-api-access-c722r\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: E0103 06:09:35.073669 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 06:09:35 crc kubenswrapper[4854]: E0103 06:09:35.075418 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.075720 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:09:35 crc kubenswrapper[4854]: E0103 06:09:35.077290 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 06:09:35 crc kubenswrapper[4854]: E0103 06:09:35.077337 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6749994886-zsx65" podUID="d346baaf-3040-4209-9049-e92c7b033015" containerName="heat-engine" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.114062 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxk8z\" (UniqueName: \"kubernetes.io/projected/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-kube-api-access-rxk8z\") pod \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.114367 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-internal-tls-certs\") pod \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.114490 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-combined-ca-bundle\") pod \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.114543 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-public-tls-certs\") pod \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.114605 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data\") pod \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.114631 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data-custom\") pod \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\" (UID: \"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698\") " Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.130459 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-kube-api-access-rxk8z" (OuterVolumeSpecName: "kube-api-access-rxk8z") pod "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" (UID: "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698"). InnerVolumeSpecName "kube-api-access-rxk8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.140388 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" (UID: "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.212359 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" (UID: "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.215261 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b49c2220-2581-4c4f-a034-10f34ddc8f80" (UID: "b49c2220-2581-4c4f-a034-10f34ddc8f80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.218625 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.218650 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.218661 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.218691 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxk8z\" (UniqueName: \"kubernetes.io/projected/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-kube-api-access-rxk8z\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.236111 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b49c2220-2581-4c4f-a034-10f34ddc8f80" (UID: "b49c2220-2581-4c4f-a034-10f34ddc8f80"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.236235 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data" (OuterVolumeSpecName: "config-data") pod "b49c2220-2581-4c4f-a034-10f34ddc8f80" (UID: "b49c2220-2581-4c4f-a034-10f34ddc8f80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.250336 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b49c2220-2581-4c4f-a034-10f34ddc8f80" (UID: "b49c2220-2581-4c4f-a034-10f34ddc8f80"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.265900 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" (UID: "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.279284 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" (UID: "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.288090 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data" (OuterVolumeSpecName: "config-data") pod "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" (UID: "a1af8fd7-a6f1-40f2-b5bc-e0d15e865698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.320771 4854 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.320804 4854 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.320814 4854 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.320822 4854 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.320836 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.320845 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49c2220-2581-4c4f-a034-10f34ddc8f80-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.532838 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-94fd9f97f-bcw2n"] Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.552552 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-94fd9f97f-bcw2n"] Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.960494 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" event={"ID":"67938fba-f337-493f-82f0-3076f30fd0fd","Type":"ContainerStarted","Data":"6bf6765c4ad480a7d1fefba15847b0989d5429b9e2031c53e23fe42fdae07cd4"} Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.962101 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c795c8675-ng42x" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.962146 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c795c8675-ng42x" event={"ID":"a1af8fd7-a6f1-40f2-b5bc-e0d15e865698","Type":"ContainerDied","Data":"e816e8736bfda71b5755ce5a173078db6e7df3cd62f311a37c75d5f199e2952f"} Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.962206 4854 scope.go:117] "RemoveContainer" containerID="547285aff9ff3cd0b00a39340f508011d742c064d6f1bc64f70bcd294cc71028" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.962638 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 03 06:09:35 crc kubenswrapper[4854]: I0103 06:09:35.980431 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" podStartSLOduration=2.887430794 podStartE2EDuration="13.980411085s" podCreationTimestamp="2026-01-03 06:09:22 +0000 UTC" firstStartedPulling="2026-01-03 06:09:23.394847645 +0000 UTC m=+1741.721424217" lastFinishedPulling="2026-01-03 06:09:34.487827946 +0000 UTC m=+1752.814404508" observedRunningTime="2026-01-03 06:09:35.980048166 +0000 UTC m=+1754.306624748" watchObservedRunningTime="2026-01-03 06:09:35.980411085 +0000 UTC m=+1754.306987657" Jan 03 06:09:36 crc kubenswrapper[4854]: I0103 06:09:36.016962 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-c795c8675-ng42x"] Jan 03 06:09:36 crc kubenswrapper[4854]: I0103 06:09:36.037591 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-c795c8675-ng42x"] Jan 03 06:09:36 crc kubenswrapper[4854]: I0103 06:09:36.047473 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=44.047448255 podStartE2EDuration="44.047448255s" podCreationTimestamp="2026-01-03 06:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:09:36.028704038 +0000 UTC m=+1754.355280610" watchObservedRunningTime="2026-01-03 06:09:36.047448255 +0000 UTC m=+1754.374024827" Jan 03 06:09:36 crc kubenswrapper[4854]: I0103 06:09:36.137406 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" path="/var/lib/kubelet/pods/a1af8fd7-a6f1-40f2-b5bc-e0d15e865698/volumes" Jan 03 06:09:36 crc kubenswrapper[4854]: I0103 06:09:36.138100 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49c2220-2581-4c4f-a034-10f34ddc8f80" path="/var/lib/kubelet/pods/b49c2220-2581-4c4f-a034-10f34ddc8f80/volumes" Jan 03 06:09:39 crc kubenswrapper[4854]: I0103 06:09:39.921837 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.314675 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-6hxsh"] Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.331067 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-6hxsh"] Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.429615 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-mjxfd"] Jan 03 06:09:40 crc kubenswrapper[4854]: E0103 06:09:40.430174 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" containerName="heat-api" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.430194 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" containerName="heat-api" Jan 03 06:09:40 crc kubenswrapper[4854]: E0103 06:09:40.430237 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49c2220-2581-4c4f-a034-10f34ddc8f80" containerName="heat-cfnapi" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.430245 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49c2220-2581-4c4f-a034-10f34ddc8f80" containerName="heat-cfnapi" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.430455 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1af8fd7-a6f1-40f2-b5bc-e0d15e865698" containerName="heat-api" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.430481 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49c2220-2581-4c4f-a034-10f34ddc8f80" containerName="heat-cfnapi" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.431258 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.442946 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.443837 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-mjxfd"] Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.551964 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-config-data\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.552018 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97vcs\" (UniqueName: \"kubernetes.io/projected/523cd360-c0b6-4711-b501-031ad4b8ed4f-kube-api-access-97vcs\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.552054 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-combined-ca-bundle\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.552136 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-scripts\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.654010 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-config-data\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.654090 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97vcs\" (UniqueName: \"kubernetes.io/projected/523cd360-c0b6-4711-b501-031ad4b8ed4f-kube-api-access-97vcs\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.654126 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-combined-ca-bundle\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.654165 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-scripts\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.662008 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-combined-ca-bundle\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.663389 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-config-data\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.672277 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-scripts\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.675761 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97vcs\" (UniqueName: \"kubernetes.io/projected/523cd360-c0b6-4711-b501-031ad4b8ed4f-kube-api-access-97vcs\") pod \"aodh-db-sync-mjxfd\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:40 crc kubenswrapper[4854]: I0103 06:09:40.755814 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:41 crc kubenswrapper[4854]: I0103 06:09:41.260989 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-mjxfd"] Jan 03 06:09:42 crc kubenswrapper[4854]: I0103 06:09:42.053197 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mjxfd" event={"ID":"523cd360-c0b6-4711-b501-031ad4b8ed4f","Type":"ContainerStarted","Data":"b43bfdb706e273f8ef92f99febe174dac9fd12135b64dc3f167c55c17484f895"} Jan 03 06:09:42 crc kubenswrapper[4854]: I0103 06:09:42.144319 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d451de-7824-46ed-9709-d884d2df08e0" path="/var/lib/kubelet/pods/43d451de-7824-46ed-9709-d884d2df08e0/volumes" Jan 03 06:09:45 crc kubenswrapper[4854]: E0103 06:09:45.072014 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b is running failed: container process not found" containerID="d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 06:09:45 crc kubenswrapper[4854]: E0103 06:09:45.074331 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b is running failed: container process not found" containerID="d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 06:09:45 crc kubenswrapper[4854]: E0103 06:09:45.074957 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b is running failed: container process not found" containerID="d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 06:09:45 crc 
kubenswrapper[4854]: E0103 06:09:45.075005 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-6749994886-zsx65" podUID="d346baaf-3040-4209-9049-e92c7b033015" containerName="heat-engine" Jan 03 06:09:45 crc kubenswrapper[4854]: I0103 06:09:45.116828 4854 generic.go:334] "Generic (PLEG): container finished" podID="d346baaf-3040-4209-9049-e92c7b033015" containerID="d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" exitCode=0 Jan 03 06:09:45 crc kubenswrapper[4854]: I0103 06:09:45.116878 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6749994886-zsx65" event={"ID":"d346baaf-3040-4209-9049-e92c7b033015","Type":"ContainerDied","Data":"d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b"} Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.120309 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:09:46 crc kubenswrapper[4854]: E0103 06:09:46.121158 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.790266 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.823993 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data\") pod \"d346baaf-3040-4209-9049-e92c7b033015\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.824226 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-combined-ca-bundle\") pod \"d346baaf-3040-4209-9049-e92c7b033015\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.824333 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data-custom\") pod \"d346baaf-3040-4209-9049-e92c7b033015\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.824409 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg62d\" (UniqueName: \"kubernetes.io/projected/d346baaf-3040-4209-9049-e92c7b033015-kube-api-access-kg62d\") pod \"d346baaf-3040-4209-9049-e92c7b033015\" (UID: \"d346baaf-3040-4209-9049-e92c7b033015\") " Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.828798 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d346baaf-3040-4209-9049-e92c7b033015" (UID: "d346baaf-3040-4209-9049-e92c7b033015"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.842464 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d346baaf-3040-4209-9049-e92c7b033015-kube-api-access-kg62d" (OuterVolumeSpecName: "kube-api-access-kg62d") pod "d346baaf-3040-4209-9049-e92c7b033015" (UID: "d346baaf-3040-4209-9049-e92c7b033015"). InnerVolumeSpecName "kube-api-access-kg62d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.892547 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d346baaf-3040-4209-9049-e92c7b033015" (UID: "d346baaf-3040-4209-9049-e92c7b033015"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.919532 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data" (OuterVolumeSpecName: "config-data") pod "d346baaf-3040-4209-9049-e92c7b033015" (UID: "d346baaf-3040-4209-9049-e92c7b033015"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.928822 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.928869 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.928882 4854 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d346baaf-3040-4209-9049-e92c7b033015-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:46 crc kubenswrapper[4854]: I0103 06:09:46.928898 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg62d\" (UniqueName: \"kubernetes.io/projected/d346baaf-3040-4209-9049-e92c7b033015-kube-api-access-kg62d\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:47 crc kubenswrapper[4854]: I0103 06:09:47.142410 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6749994886-zsx65" event={"ID":"d346baaf-3040-4209-9049-e92c7b033015","Type":"ContainerDied","Data":"e8fed35324b766976ad7e2b21295c85175c80803b9cd488abf74f10595a77f49"} Jan 03 06:09:47 crc kubenswrapper[4854]: I0103 06:09:47.142449 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6749994886-zsx65" Jan 03 06:09:47 crc kubenswrapper[4854]: I0103 06:09:47.142471 4854 scope.go:117] "RemoveContainer" containerID="d59a70be9d0375e47163a3d0b8327ed47cd9ba0c8844425faf08091c6be4990b" Jan 03 06:09:47 crc kubenswrapper[4854]: I0103 06:09:47.145856 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mjxfd" event={"ID":"523cd360-c0b6-4711-b501-031ad4b8ed4f","Type":"ContainerStarted","Data":"ce6d54cb224afb56ad776a76583153b8945e1caf98b1df961f80d5b8879898fd"} Jan 03 06:09:47 crc kubenswrapper[4854]: I0103 06:09:47.171041 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-mjxfd" podStartSLOduration=1.831552458 podStartE2EDuration="7.171011357s" podCreationTimestamp="2026-01-03 06:09:40 +0000 UTC" firstStartedPulling="2026-01-03 06:09:41.288885844 +0000 UTC m=+1759.615462416" lastFinishedPulling="2026-01-03 06:09:46.628344733 +0000 UTC m=+1764.954921315" observedRunningTime="2026-01-03 06:09:47.166860394 +0000 UTC m=+1765.493436986" watchObservedRunningTime="2026-01-03 06:09:47.171011357 +0000 UTC m=+1765.497587969" Jan 03 06:09:47 crc kubenswrapper[4854]: I0103 06:09:47.207888 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6749994886-zsx65"] Jan 03 06:09:47 crc kubenswrapper[4854]: I0103 06:09:47.241224 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6749994886-zsx65"] Jan 03 06:09:48 crc kubenswrapper[4854]: I0103 06:09:48.152464 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d346baaf-3040-4209-9049-e92c7b033015" path="/var/lib/kubelet/pods/d346baaf-3040-4209-9049-e92c7b033015/volumes" Jan 03 06:09:48 crc kubenswrapper[4854]: E0103 06:09:48.293931 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67938fba_f337_493f_82f0_3076f30fd0fd.slice/crio-6bf6765c4ad480a7d1fefba15847b0989d5429b9e2031c53e23fe42fdae07cd4.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:09:48 crc kubenswrapper[4854]: E0103 06:09:48.423235 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67938fba_f337_493f_82f0_3076f30fd0fd.slice/crio-conmon-6bf6765c4ad480a7d1fefba15847b0989d5429b9e2031c53e23fe42fdae07cd4.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:09:49 crc kubenswrapper[4854]: I0103 06:09:49.178455 4854 generic.go:334] "Generic (PLEG): container finished" podID="67938fba-f337-493f-82f0-3076f30fd0fd" containerID="6bf6765c4ad480a7d1fefba15847b0989d5429b9e2031c53e23fe42fdae07cd4" exitCode=0 Jan 03 06:09:49 crc kubenswrapper[4854]: I0103 06:09:49.178499 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" event={"ID":"67938fba-f337-493f-82f0-3076f30fd0fd","Type":"ContainerDied","Data":"6bf6765c4ad480a7d1fefba15847b0989d5429b9e2031c53e23fe42fdae07cd4"} Jan 03 06:09:50 crc kubenswrapper[4854]: I0103 06:09:50.987893 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.044856 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkbnx\" (UniqueName: \"kubernetes.io/projected/67938fba-f337-493f-82f0-3076f30fd0fd-kube-api-access-kkbnx\") pod \"67938fba-f337-493f-82f0-3076f30fd0fd\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.044945 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-ssh-key\") pod \"67938fba-f337-493f-82f0-3076f30fd0fd\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.045100 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-repo-setup-combined-ca-bundle\") pod \"67938fba-f337-493f-82f0-3076f30fd0fd\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.045174 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-inventory\") pod \"67938fba-f337-493f-82f0-3076f30fd0fd\" (UID: \"67938fba-f337-493f-82f0-3076f30fd0fd\") " Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.050835 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67938fba-f337-493f-82f0-3076f30fd0fd-kube-api-access-kkbnx" (OuterVolumeSpecName: "kube-api-access-kkbnx") pod "67938fba-f337-493f-82f0-3076f30fd0fd" (UID: "67938fba-f337-493f-82f0-3076f30fd0fd"). InnerVolumeSpecName "kube-api-access-kkbnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.050982 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "67938fba-f337-493f-82f0-3076f30fd0fd" (UID: "67938fba-f337-493f-82f0-3076f30fd0fd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.080173 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "67938fba-f337-493f-82f0-3076f30fd0fd" (UID: "67938fba-f337-493f-82f0-3076f30fd0fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.081724 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-inventory" (OuterVolumeSpecName: "inventory") pod "67938fba-f337-493f-82f0-3076f30fd0fd" (UID: "67938fba-f337-493f-82f0-3076f30fd0fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.160045 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.160103 4854 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.160116 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67938fba-f337-493f-82f0-3076f30fd0fd-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.160126 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkbnx\" (UniqueName: \"kubernetes.io/projected/67938fba-f337-493f-82f0-3076f30fd0fd-kube-api-access-kkbnx\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.207407 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" event={"ID":"67938fba-f337-493f-82f0-3076f30fd0fd","Type":"ContainerDied","Data":"83745cdeda2bbb32e2441f88212a3fe94ce2e150423d3f83b3c49a6b60455e0d"} Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.207453 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83745cdeda2bbb32e2441f88212a3fe94ce2e150423d3f83b3c49a6b60455e0d" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.207507 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k79vj" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.372646 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv"] Jan 03 06:09:51 crc kubenswrapper[4854]: E0103 06:09:51.373567 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d346baaf-3040-4209-9049-e92c7b033015" containerName="heat-engine" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.373593 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d346baaf-3040-4209-9049-e92c7b033015" containerName="heat-engine" Jan 03 06:09:51 crc kubenswrapper[4854]: E0103 06:09:51.373653 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67938fba-f337-493f-82f0-3076f30fd0fd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.373666 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="67938fba-f337-493f-82f0-3076f30fd0fd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.374004 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="d346baaf-3040-4209-9049-e92c7b033015" containerName="heat-engine" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.374040 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="67938fba-f337-493f-82f0-3076f30fd0fd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.375210 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.378245 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.380603 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.380707 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.396916 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv"] Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.401753 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.467999 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.468166 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4xb\" (UniqueName: \"kubernetes.io/projected/ec14fa8c-09a7-416f-b347-fe6358d27fee-kube-api-access-ng4xb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.468372 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.570802 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.570879 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4xb\" (UniqueName: \"kubernetes.io/projected/ec14fa8c-09a7-416f-b347-fe6358d27fee-kube-api-access-ng4xb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.570934 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.574896 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.575518 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.600434 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4xb\" (UniqueName: \"kubernetes.io/projected/ec14fa8c-09a7-416f-b347-fe6358d27fee-kube-api-access-ng4xb\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bf4rv\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.697699 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:09:51 crc kubenswrapper[4854]: I0103 06:09:51.853419 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 03 06:09:52 crc kubenswrapper[4854]: I0103 06:09:52.456980 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv"] Jan 03 06:09:52 crc kubenswrapper[4854]: W0103 06:09:52.481048 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec14fa8c_09a7_416f_b347_fe6358d27fee.slice/crio-4875c4507bfd9a68a41a844d3551940973d2fb7ff9bcebca6f31c0efb67af955 WatchSource:0}: Error finding container 4875c4507bfd9a68a41a844d3551940973d2fb7ff9bcebca6f31c0efb67af955: Status 404 returned error can't find the container with id 4875c4507bfd9a68a41a844d3551940973d2fb7ff9bcebca6f31c0efb67af955 Jan 03 06:09:53 crc kubenswrapper[4854]: I0103 06:09:53.242328 4854 generic.go:334] "Generic (PLEG): container finished" podID="523cd360-c0b6-4711-b501-031ad4b8ed4f" containerID="ce6d54cb224afb56ad776a76583153b8945e1caf98b1df961f80d5b8879898fd" exitCode=0 Jan 03 06:09:53 crc kubenswrapper[4854]: I0103 06:09:53.242673 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mjxfd" event={"ID":"523cd360-c0b6-4711-b501-031ad4b8ed4f","Type":"ContainerDied","Data":"ce6d54cb224afb56ad776a76583153b8945e1caf98b1df961f80d5b8879898fd"} Jan 03 06:09:53 crc kubenswrapper[4854]: I0103 06:09:53.245313 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" event={"ID":"ec14fa8c-09a7-416f-b347-fe6358d27fee","Type":"ContainerStarted","Data":"4875c4507bfd9a68a41a844d3551940973d2fb7ff9bcebca6f31c0efb67af955"} Jan 03 06:09:53 crc kubenswrapper[4854]: I0103 06:09:53.289303 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 03 06:09:53 crc kubenswrapper[4854]: I0103 06:09:53.348053 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.267293 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" event={"ID":"ec14fa8c-09a7-416f-b347-fe6358d27fee","Type":"ContainerStarted","Data":"3cb14156944019cc91f348d4465453b212b4f0dcdf6ace7c73db678047a61c10"} Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.315225 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" podStartSLOduration=2.1603839320000002 podStartE2EDuration="3.31520073s" podCreationTimestamp="2026-01-03 06:09:51 +0000 UTC" firstStartedPulling="2026-01-03 06:09:52.486115511 +0000 UTC m=+1770.812692083" lastFinishedPulling="2026-01-03 06:09:53.640932309 +0000 UTC m=+1771.967508881" observedRunningTime="2026-01-03 06:09:54.295895089 +0000 UTC m=+1772.622471661" watchObservedRunningTime="2026-01-03 06:09:54.31520073 +0000 UTC m=+1772.641777302" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.745856 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.880823 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-combined-ca-bundle\") pod \"523cd360-c0b6-4711-b501-031ad4b8ed4f\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.880931 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-scripts\") pod \"523cd360-c0b6-4711-b501-031ad4b8ed4f\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.881004 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-config-data\") pod \"523cd360-c0b6-4711-b501-031ad4b8ed4f\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.881151 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97vcs\" (UniqueName: \"kubernetes.io/projected/523cd360-c0b6-4711-b501-031ad4b8ed4f-kube-api-access-97vcs\") pod \"523cd360-c0b6-4711-b501-031ad4b8ed4f\" (UID: \"523cd360-c0b6-4711-b501-031ad4b8ed4f\") " Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.887783 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523cd360-c0b6-4711-b501-031ad4b8ed4f-kube-api-access-97vcs" (OuterVolumeSpecName: "kube-api-access-97vcs") pod "523cd360-c0b6-4711-b501-031ad4b8ed4f" (UID: "523cd360-c0b6-4711-b501-031ad4b8ed4f"). InnerVolumeSpecName "kube-api-access-97vcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.890323 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-scripts" (OuterVolumeSpecName: "scripts") pod "523cd360-c0b6-4711-b501-031ad4b8ed4f" (UID: "523cd360-c0b6-4711-b501-031ad4b8ed4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.919230 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "523cd360-c0b6-4711-b501-031ad4b8ed4f" (UID: "523cd360-c0b6-4711-b501-031ad4b8ed4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.938201 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-config-data" (OuterVolumeSpecName: "config-data") pod "523cd360-c0b6-4711-b501-031ad4b8ed4f" (UID: "523cd360-c0b6-4711-b501-031ad4b8ed4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.984051 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.984106 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97vcs\" (UniqueName: \"kubernetes.io/projected/523cd360-c0b6-4711-b501-031ad4b8ed4f-kube-api-access-97vcs\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.984118 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:54 crc kubenswrapper[4854]: I0103 06:09:54.984129 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523cd360-c0b6-4711-b501-031ad4b8ed4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:09:55 crc kubenswrapper[4854]: I0103 06:09:55.279876 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-mjxfd" Jan 03 06:09:55 crc kubenswrapper[4854]: I0103 06:09:55.279864 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mjxfd" event={"ID":"523cd360-c0b6-4711-b501-031ad4b8ed4f","Type":"ContainerDied","Data":"b43bfdb706e273f8ef92f99febe174dac9fd12135b64dc3f167c55c17484f895"} Jan 03 06:09:55 crc kubenswrapper[4854]: I0103 06:09:55.280046 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43bfdb706e273f8ef92f99febe174dac9fd12135b64dc3f167c55c17484f895" Jan 03 06:09:55 crc kubenswrapper[4854]: I0103 06:09:55.469967 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 03 06:09:55 crc kubenswrapper[4854]: I0103 06:09:55.470707 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-api" containerID="cri-o://d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e" gracePeriod=30 Jan 03 06:09:55 crc kubenswrapper[4854]: I0103 06:09:55.470739 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-listener" containerID="cri-o://e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c" gracePeriod=30 Jan 03 06:09:55 crc kubenswrapper[4854]: I0103 06:09:55.470846 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-notifier" containerID="cri-o://3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3" gracePeriod=30 Jan 03 06:09:55 crc kubenswrapper[4854]: I0103 06:09:55.470875 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-evaluator" containerID="cri-o://1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0" gracePeriod=30 Jan 03 06:09:56 crc kubenswrapper[4854]: I0103 06:09:56.296136 4854 generic.go:334] "Generic (PLEG): container finished" podID="df567d27-a8cb-4757-8f04-c46469c0a7e4" 
containerID="1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0" exitCode=0 Jan 03 06:09:56 crc kubenswrapper[4854]: I0103 06:09:56.296189 4854 generic.go:334] "Generic (PLEG): container finished" podID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerID="d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e" exitCode=0 Jan 03 06:09:56 crc kubenswrapper[4854]: I0103 06:09:56.296224 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerDied","Data":"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0"} Jan 03 06:09:56 crc kubenswrapper[4854]: I0103 06:09:56.296275 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerDied","Data":"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e"} Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.078426 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.118628 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:09:59 crc kubenswrapper[4854]: E0103 06:09:59.119386 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.128129 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerName="rabbitmq" containerID="cri-o://47373f0c373d41d2ed0a8769659142e352b16482458dbc79ebf2fb0ab0dcee4a" gracePeriod=604795 Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.354051 4854 generic.go:334] "Generic (PLEG): container finished" podID="ec14fa8c-09a7-416f-b347-fe6358d27fee" containerID="3cb14156944019cc91f348d4465453b212b4f0dcdf6ace7c73db678047a61c10" exitCode=0 Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.354105 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" event={"ID":"ec14fa8c-09a7-416f-b347-fe6358d27fee","Type":"ContainerDied","Data":"3cb14156944019cc91f348d4465453b212b4f0dcdf6ace7c73db678047a61c10"} Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.379246 4854 scope.go:117] "RemoveContainer" containerID="2c18ad83e63a4aed636889a42df51df1b95d61ac5f229e131fa8c306c15c3db1" Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.409335 4854 scope.go:117] "RemoveContainer" containerID="695571ad73ffdb8f7163dbc47877775d4429b8cfb4a0a41df26e999591b95662" Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.466885 4854 scope.go:117] "RemoveContainer" containerID="1692c8acfa3150463e84907272e673ac637c61b8759e684e77f9e6829b387f9e" Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.563196 4854 scope.go:117] "RemoveContainer" 
containerID="b7d8550b767b745c10631f8f4cfd712f0fbf747774b54c5ddba943f86791c42c" Jan 03 06:09:59 crc kubenswrapper[4854]: I0103 06:09:59.582489 4854 scope.go:117] "RemoveContainer" containerID="cb33efd6867bf5e6a28760f46ec744ebf7f54b9c7e975f3f4daa06727ac6d6ee" Jan 03 06:10:00 crc kubenswrapper[4854]: I0103 06:10:00.954558 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.038973 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.059007 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng4xb\" (UniqueName: \"kubernetes.io/projected/ec14fa8c-09a7-416f-b347-fe6358d27fee-kube-api-access-ng4xb\") pod \"ec14fa8c-09a7-416f-b347-fe6358d27fee\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.059269 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-inventory\") pod \"ec14fa8c-09a7-416f-b347-fe6358d27fee\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.059446 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-ssh-key\") pod \"ec14fa8c-09a7-416f-b347-fe6358d27fee\" (UID: \"ec14fa8c-09a7-416f-b347-fe6358d27fee\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.077970 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec14fa8c-09a7-416f-b347-fe6358d27fee-kube-api-access-ng4xb" (OuterVolumeSpecName: "kube-api-access-ng4xb") pod "ec14fa8c-09a7-416f-b347-fe6358d27fee" (UID: "ec14fa8c-09a7-416f-b347-fe6358d27fee"). InnerVolumeSpecName "kube-api-access-ng4xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.103266 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-inventory" (OuterVolumeSpecName: "inventory") pod "ec14fa8c-09a7-416f-b347-fe6358d27fee" (UID: "ec14fa8c-09a7-416f-b347-fe6358d27fee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.113432 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ec14fa8c-09a7-416f-b347-fe6358d27fee" (UID: "ec14fa8c-09a7-416f-b347-fe6358d27fee"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.161471 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-scripts\") pod \"df567d27-a8cb-4757-8f04-c46469c0a7e4\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.161708 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8tgw\" (UniqueName: \"kubernetes.io/projected/df567d27-a8cb-4757-8f04-c46469c0a7e4-kube-api-access-n8tgw\") pod \"df567d27-a8cb-4757-8f04-c46469c0a7e4\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.161746 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-combined-ca-bundle\") pod \"df567d27-a8cb-4757-8f04-c46469c0a7e4\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.162595 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-public-tls-certs\") pod \"df567d27-a8cb-4757-8f04-c46469c0a7e4\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.162655 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-config-data\") pod \"df567d27-a8cb-4757-8f04-c46469c0a7e4\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.162742 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-internal-tls-certs\") pod \"df567d27-a8cb-4757-8f04-c46469c0a7e4\" (UID: \"df567d27-a8cb-4757-8f04-c46469c0a7e4\") " Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.163266 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.163288 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec14fa8c-09a7-416f-b347-fe6358d27fee-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.163298 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng4xb\" (UniqueName: \"kubernetes.io/projected/ec14fa8c-09a7-416f-b347-fe6358d27fee-kube-api-access-ng4xb\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.165404 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-scripts" (OuterVolumeSpecName: "scripts") pod "df567d27-a8cb-4757-8f04-c46469c0a7e4" (UID: "df567d27-a8cb-4757-8f04-c46469c0a7e4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.166748 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df567d27-a8cb-4757-8f04-c46469c0a7e4-kube-api-access-n8tgw" (OuterVolumeSpecName: "kube-api-access-n8tgw") pod "df567d27-a8cb-4757-8f04-c46469c0a7e4" (UID: "df567d27-a8cb-4757-8f04-c46469c0a7e4"). InnerVolumeSpecName "kube-api-access-n8tgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.233664 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "df567d27-a8cb-4757-8f04-c46469c0a7e4" (UID: "df567d27-a8cb-4757-8f04-c46469c0a7e4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.253408 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "df567d27-a8cb-4757-8f04-c46469c0a7e4" (UID: "df567d27-a8cb-4757-8f04-c46469c0a7e4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.265498 4854 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.265551 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.265562 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8tgw\" (UniqueName: \"kubernetes.io/projected/df567d27-a8cb-4757-8f04-c46469c0a7e4-kube-api-access-n8tgw\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.265571 4854 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.314996 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-config-data" (OuterVolumeSpecName: "config-data") pod "df567d27-a8cb-4757-8f04-c46469c0a7e4" (UID: "df567d27-a8cb-4757-8f04-c46469c0a7e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.324714 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df567d27-a8cb-4757-8f04-c46469c0a7e4" (UID: "df567d27-a8cb-4757-8f04-c46469c0a7e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.367825 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.367867 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df567d27-a8cb-4757-8f04-c46469c0a7e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.376308 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" event={"ID":"ec14fa8c-09a7-416f-b347-fe6358d27fee","Type":"ContainerDied","Data":"4875c4507bfd9a68a41a844d3551940973d2fb7ff9bcebca6f31c0efb67af955"} Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.376365 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4875c4507bfd9a68a41a844d3551940973d2fb7ff9bcebca6f31c0efb67af955" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.376321 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bf4rv" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.381451 4854 generic.go:334] "Generic (PLEG): container finished" podID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerID="e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c" exitCode=0 Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.381482 4854 generic.go:334] "Generic (PLEG): container finished" podID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerID="3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3" exitCode=0 Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.381509 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerDied","Data":"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c"} Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.381540 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerDied","Data":"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3"} Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.381548 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"df567d27-a8cb-4757-8f04-c46469c0a7e4","Type":"ContainerDied","Data":"fb3a9ab99a93c69c71499c0b9cd7b71c4cb1a4b2ae434995657da4b3fd1936d6"} Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.381567 4854 scope.go:117] "RemoveContainer" containerID="e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.381610 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.519613 4854 scope.go:117] "RemoveContainer" containerID="3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.595160 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.642237 4854 scope.go:117] "RemoveContainer" containerID="1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.671176 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.701141 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.740442 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-notifier" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.740479 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-notifier" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.740501 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec14fa8c-09a7-416f-b347-fe6358d27fee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.740510 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec14fa8c-09a7-416f-b347-fe6358d27fee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.740527 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-evaluator" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.740532 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-evaluator" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.740542 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-listener" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.740548 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-listener" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.740560 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-api" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.740568 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-api" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.740578 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523cd360-c0b6-4711-b501-031ad4b8ed4f" containerName="aodh-db-sync" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.740584 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="523cd360-c0b6-4711-b501-031ad4b8ed4f" containerName="aodh-db-sync" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.741629 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-evaluator" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.741678 4854 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-notifier" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.741701 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-api" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.741713 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" containerName="aodh-listener" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.741723 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="523cd360-c0b6-4711-b501-031ad4b8ed4f" containerName="aodh-db-sync" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.741746 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec14fa8c-09a7-416f-b347-fe6358d27fee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.744126 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.756633 4854 scope.go:117] "RemoveContainer" containerID="d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.757040 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bkf2n" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.757260 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7"] Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.757373 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.757562 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.757652 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.757750 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.759460 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.778339 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7"] Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.782064 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.782320 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.782479 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.782476 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.818567 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.835391 4854 scope.go:117] "RemoveContainer" containerID="e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.835857 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c\": container with ID starting with e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c not found: ID does not exist" containerID="e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.835891 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c"} err="failed to get container status \"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c\": rpc error: code = NotFound desc = could not find container \"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c\": container with ID starting with e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c not found: ID does not exist" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.835916 4854 scope.go:117] "RemoveContainer" containerID="3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.836219 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3\": container with ID starting with 3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3 not found: ID does not exist" containerID="3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.836240 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3"} err="failed to get container status \"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3\": rpc error: code = NotFound desc = could not find container \"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3\": container with ID starting with 
3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3 not found: ID does not exist" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.836251 4854 scope.go:117] "RemoveContainer" containerID="1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.836425 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0\": container with ID starting with 1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0 not found: ID does not exist" containerID="1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.836444 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0"} err="failed to get container status \"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0\": rpc error: code = NotFound desc = could not find container \"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0\": container with ID starting with 1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0 not found: ID does not exist" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.836455 4854 scope.go:117] "RemoveContainer" containerID="d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e" Jan 03 06:10:01 crc kubenswrapper[4854]: E0103 06:10:01.836641 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e\": container with ID starting with d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e not found: ID does not exist" containerID="d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.836671 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e"} err="failed to get container status \"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e\": rpc error: code = NotFound desc = could not find container \"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e\": container with ID starting with d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e not found: ID does not exist" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.836695 4854 scope.go:117] "RemoveContainer" containerID="e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.836852 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c"} err="failed to get container status \"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c\": rpc error: code = NotFound desc = could not find container \"e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c\": container with ID starting with e39e0543ca796c855c85b3fcdebe71a5e9e48c175344d5e3a7ff77391354c18c not found: ID does not exist" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.836871 4854 scope.go:117] "RemoveContainer" containerID="3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3" Jan 03 06:10:01 crc 
kubenswrapper[4854]: I0103 06:10:01.837029 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3"} err="failed to get container status \"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3\": rpc error: code = NotFound desc = could not find container \"3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3\": container with ID starting with 3c809ffcc76650f7b9a4d185df60590f7e8f208a277c9315e67ff91ad4db79e3 not found: ID does not exist" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.837054 4854 scope.go:117] "RemoveContainer" containerID="1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.837227 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0"} err="failed to get container status \"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0\": rpc error: code = NotFound desc = could not find container \"1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0\": container with ID starting with 1589d6a12003c6cf4965e1e3723d23c6200d32fc9e0c5032843c0757ea4173d0 not found: ID does not exist" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.837244 4854 scope.go:117] "RemoveContainer" containerID="d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.837432 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e"} err="failed to get container status \"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e\": rpc error: code = NotFound desc = could not find container \"d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e\": container with ID starting with d06d6bad16d7d5f4d1a48a197f5e649ffdb7503b92922d3d84065c3d6270283e not found: ID does not exist" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.844857 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6c2\" (UniqueName: \"kubernetes.io/projected/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-kube-api-access-ln6c2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.844927 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.845001 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-combined-ca-bundle\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.845039 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-config-data\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.845381 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.845471 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-public-tls-certs\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.845548 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-internal-tls-certs\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.845633 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-scripts\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.845665 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz9k4\" (UniqueName: \"kubernetes.io/projected/edfce958-206a-4aed-a7c5-ca9b9cc10227-kube-api-access-nz9k4\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.845718 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.947939 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-scripts\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.947994 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz9k4\" (UniqueName: \"kubernetes.io/projected/edfce958-206a-4aed-a7c5-ca9b9cc10227-kube-api-access-nz9k4\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.948037 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.948098 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln6c2\" (UniqueName: \"kubernetes.io/projected/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-kube-api-access-ln6c2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.948144 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.948222 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-combined-ca-bundle\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.948248 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-config-data\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.948302 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.948363 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-public-tls-certs\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.948431 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-internal-tls-certs\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.954105 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.954188 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-internal-tls-certs\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.954571 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.954638 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-scripts\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.955354 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-public-tls-certs\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.957343 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-config-data\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.965495 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.966308 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfce958-206a-4aed-a7c5-ca9b9cc10227-combined-ca-bundle\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.972531 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln6c2\" (UniqueName: \"kubernetes.io/projected/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-kube-api-access-ln6c2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:10:01 crc kubenswrapper[4854]: I0103 06:10:01.973804 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz9k4\" (UniqueName: \"kubernetes.io/projected/edfce958-206a-4aed-a7c5-ca9b9cc10227-kube-api-access-nz9k4\") pod \"aodh-0\" (UID: \"edfce958-206a-4aed-a7c5-ca9b9cc10227\") " pod="openstack/aodh-0" Jan 03 06:10:02 crc kubenswrapper[4854]: I0103 06:10:02.118145 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0"
Jan 03 06:10:02 crc kubenswrapper[4854]: I0103 06:10:02.134278 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df567d27-a8cb-4757-8f04-c46469c0a7e4" path="/var/lib/kubelet/pods/df567d27-a8cb-4757-8f04-c46469c0a7e4/volumes"
Jan 03 06:10:02 crc kubenswrapper[4854]: I0103 06:10:02.136794 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7"
Jan 03 06:10:02 crc kubenswrapper[4854]: W0103 06:10:02.759378 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f2831c4_a6e4_46a7_85b9_32fa56e4e268.slice/crio-1c5dff1dd350260bb0f987173a67e157462a8e507b6f5b9b19d3844b40a8e798 WatchSource:0}: Error finding container 1c5dff1dd350260bb0f987173a67e157462a8e507b6f5b9b19d3844b40a8e798: Status 404 returned error can't find the container with id 1c5dff1dd350260bb0f987173a67e157462a8e507b6f5b9b19d3844b40a8e798
Jan 03 06:10:02 crc kubenswrapper[4854]: I0103 06:10:02.759848 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7"]
Jan 03 06:10:02 crc kubenswrapper[4854]: W0103 06:10:02.787979 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedfce958_206a_4aed_a7c5_ca9b9cc10227.slice/crio-a5b09ac26ecf0ba6d1d360cce2b02852f9b65c570c1bc0eb2c07f1811bc3aa05 WatchSource:0}: Error finding container a5b09ac26ecf0ba6d1d360cce2b02852f9b65c570c1bc0eb2c07f1811bc3aa05: Status 404 returned error can't find the container with id a5b09ac26ecf0ba6d1d360cce2b02852f9b65c570c1bc0eb2c07f1811bc3aa05
Jan 03 06:10:02 crc kubenswrapper[4854]: I0103 06:10:02.791496 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 03 06:10:03 crc kubenswrapper[4854]: I0103 06:10:03.447751 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"edfce958-206a-4aed-a7c5-ca9b9cc10227","Type":"ContainerStarted","Data":"c53a8f51752c03be6ba8088e65327ccfc7645ace19b8a34e171e93025993ffd8"}
Jan 03 06:10:03 crc kubenswrapper[4854]: I0103 06:10:03.448266 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"edfce958-206a-4aed-a7c5-ca9b9cc10227","Type":"ContainerStarted","Data":"a5b09ac26ecf0ba6d1d360cce2b02852f9b65c570c1bc0eb2c07f1811bc3aa05"}
Jan 03 06:10:03 crc kubenswrapper[4854]: I0103 06:10:03.454980 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" event={"ID":"4f2831c4-a6e4-46a7-85b9-32fa56e4e268","Type":"ContainerStarted","Data":"1c5dff1dd350260bb0f987173a67e157462a8e507b6f5b9b19d3844b40a8e798"}
Jan 03 06:10:03 crc kubenswrapper[4854]: I0103 06:10:03.482885 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" podStartSLOduration=2.078770752 podStartE2EDuration="2.482864845s" podCreationTimestamp="2026-01-03 06:10:01 +0000 UTC" firstStartedPulling="2026-01-03 06:10:02.761444059 +0000 UTC m=+1781.088020631" lastFinishedPulling="2026-01-03 06:10:03.165538152 +0000 UTC m=+1781.492114724" observedRunningTime="2026-01-03 06:10:03.472105577 +0000 UTC m=+1781.798682159" watchObservedRunningTime="2026-01-03 06:10:03.482864845 +0000 UTC m=+1781.809441427"
Jan 03 06:10:04 crc kubenswrapper[4854]: I0103 06:10:04.474191 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" event={"ID":"4f2831c4-a6e4-46a7-85b9-32fa56e4e268","Type":"ContainerStarted","Data":"dc0cc3d304b58fbb1c337d39f96b65965a8945c36c022e835934d5d20412677c"}
Jan 03 06:10:05 crc kubenswrapper[4854]: I0103 06:10:05.495718 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"edfce958-206a-4aed-a7c5-ca9b9cc10227","Type":"ContainerStarted","Data":"5a7fc131a09e0633610dfca00a5bc3c4362298d6aa4b7e962a80c74b2ee80b76"}
Jan 03 06:10:05 crc kubenswrapper[4854]: I0103 06:10:05.498638 4854 generic.go:334] "Generic (PLEG): container finished" podID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerID="47373f0c373d41d2ed0a8769659142e352b16482458dbc79ebf2fb0ab0dcee4a" exitCode=0
Jan 03 06:10:05 crc kubenswrapper[4854]: I0103 06:10:05.498741 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ba007649-daf8-445b-b2c8-73ce6ec54403","Type":"ContainerDied","Data":"47373f0c373d41d2ed0a8769659142e352b16482458dbc79ebf2fb0ab0dcee4a"}
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.166485 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.197618 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-confd\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.197688 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-tls\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.197845 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba007649-daf8-445b-b2c8-73ce6ec54403-pod-info\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.197873 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-server-conf\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.197976 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-erlang-cookie\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.198066 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-plugins-conf\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.198142 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2mg4\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-kube-api-access-p2mg4\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.198188 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-config-data\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.198215 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-plugins\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.199584 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.199594 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.199668 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba007649-daf8-445b-b2c8-73ce6ec54403-erlang-cookie-secret\") pod \"ba007649-daf8-445b-b2c8-73ce6ec54403\" (UID: \"ba007649-daf8-445b-b2c8-73ce6ec54403\") "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.199728 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.200117 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.200829 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.200846 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.200858 4854 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.213347 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba007649-daf8-445b-b2c8-73ce6ec54403-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.225789 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.227641 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ba007649-daf8-445b-b2c8-73ce6ec54403-pod-info" (OuterVolumeSpecName: "pod-info") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.234795 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-kube-api-access-p2mg4" (OuterVolumeSpecName: "kube-api-access-p2mg4") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "kube-api-access-p2mg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.265889 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-config-data" (OuterVolumeSpecName: "config-data") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.275966 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354" (OuterVolumeSpecName: "persistence") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.304130 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2mg4\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-kube-api-access-p2mg4\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.304178 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.304207 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") on node \"crc\" "
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.304221 4854 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba007649-daf8-445b-b2c8-73ce6ec54403-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.304232 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.304280 4854 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba007649-daf8-445b-b2c8-73ce6ec54403-pod-info\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.357631 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-server-conf" (OuterVolumeSpecName: "server-conf") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.365188 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.366487 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354") on node "crc"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.406821 4854 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba007649-daf8-445b-b2c8-73ce6ec54403-server-conf\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.407197 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.480206 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ba007649-daf8-445b-b2c8-73ce6ec54403" (UID: "ba007649-daf8-445b-b2c8-73ce6ec54403"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.509537 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba007649-daf8-445b-b2c8-73ce6ec54403-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.512708 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"edfce958-206a-4aed-a7c5-ca9b9cc10227","Type":"ContainerStarted","Data":"84a0c5feeb71d7808ad0b2275cebc1ac9fb480ca2169cabdeb39f777cfefb1e7"}
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.515060 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ba007649-daf8-445b-b2c8-73ce6ec54403","Type":"ContainerDied","Data":"0a8423abbecac7236d02416ed90148ceb9912dfa5aeecf071f12da7504b96e87"}
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.515202 4854 scope.go:117] "RemoveContainer" containerID="47373f0c373d41d2ed0a8769659142e352b16482458dbc79ebf2fb0ab0dcee4a"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.515429 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.835042 4854 scope.go:117] "RemoveContainer" containerID="3da223885f26a8fbf9908728d64ed6e77133ba67ab412d2929573a54cadd668b"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.841141 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.858627 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.885275 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 03 06:10:06 crc kubenswrapper[4854]: E0103 06:10:06.887669 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerName="rabbitmq"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.887701 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerName="rabbitmq"
Jan 03 06:10:06 crc kubenswrapper[4854]: E0103 06:10:06.887752 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerName="setup-container"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.887761 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerName="setup-container"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.888143 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" containerName="rabbitmq"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.891722 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 03 06:10:06 crc kubenswrapper[4854]: I0103 06:10:06.948736 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041095 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-server-conf\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041536 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5k97\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-kube-api-access-g5k97\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041597 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041637 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041692 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041717 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041744 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041787 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-config-data\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041809 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041846 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.041874 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-pod-info\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144383 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144450 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144496 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144522 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144544 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144590 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-config-data\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144608 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144646 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144672 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-pod-info\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144711 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-server-conf\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.144779 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5k97\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-kube-api-access-g5k97\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.145181 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.146608 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.147348 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-config-data\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.150910 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.151934 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-server-conf\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.152517 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.152570 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.153485 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-pod-info\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.155649 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.155689 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c874a54e7a277745e906225a135021eb2dc44cabc2a4e528eb3f699d09437dd7/globalmount\"" pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.156412 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.163329 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5k97\" (UniqueName: \"kubernetes.io/projected/e3bf6ad5-2f7b-4587-ab1e-8c887789d9da-kube-api-access-g5k97\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.234011 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-785a3b7a-40c0-4233-b4eb-a94f0e723354\") pod \"rabbitmq-server-1\" (UID: \"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da\") " pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.277601 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 03 06:10:07 crc kubenswrapper[4854]: I0103 06:10:07.897826 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 03 06:10:08 crc kubenswrapper[4854]: W0103 06:10:08.124380 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3bf6ad5_2f7b_4587_ab1e_8c887789d9da.slice/crio-873026ad6b86004c7bb83cda2c48a6a06c76aad0c1ebc41252870b6e22d6cb0c WatchSource:0}: Error finding container 873026ad6b86004c7bb83cda2c48a6a06c76aad0c1ebc41252870b6e22d6cb0c: Status 404 returned error can't find the container with id 873026ad6b86004c7bb83cda2c48a6a06c76aad0c1ebc41252870b6e22d6cb0c
Jan 03 06:10:08 crc kubenswrapper[4854]: I0103 06:10:08.135613 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba007649-daf8-445b-b2c8-73ce6ec54403" path="/var/lib/kubelet/pods/ba007649-daf8-445b-b2c8-73ce6ec54403/volumes"
Jan 03 06:10:08 crc kubenswrapper[4854]: I0103 06:10:08.588099 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da","Type":"ContainerStarted","Data":"873026ad6b86004c7bb83cda2c48a6a06c76aad0c1ebc41252870b6e22d6cb0c"}
Jan 03 06:10:09 crc kubenswrapper[4854]: I0103 06:10:09.602841 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"edfce958-206a-4aed-a7c5-ca9b9cc10227","Type":"ContainerStarted","Data":"2527d46d527a97564f1b65a50b76c0166fbb06fbbbc86f60e577698cc4e6e2e4"}
Jan 03 06:10:09 crc kubenswrapper[4854]: I0103 06:10:09.648377 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.251472586 podStartE2EDuration="8.648350365s" podCreationTimestamp="2026-01-03 06:10:01 +0000 UTC" firstStartedPulling="2026-01-03 06:10:02.791280362 +0000 UTC m=+1781.117856934" lastFinishedPulling="2026-01-03 06:10:08.188158151 +0000 UTC m=+1786.514734713" observedRunningTime="2026-01-03 06:10:09.636345786 +0000 UTC m=+1787.962922368" watchObservedRunningTime="2026-01-03 06:10:09.648350365 +0000 UTC m=+1787.974926947"
Jan 03 06:10:10 crc kubenswrapper[4854]: I0103 06:10:10.618997 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da","Type":"ContainerStarted","Data":"ab68375283a562cccbcb84443d9b9a2694c67699231d7906f0d342e7070161a4"}
Jan 03 06:10:14 crc kubenswrapper[4854]: I0103 06:10:14.118451 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:10:14 crc kubenswrapper[4854]: E0103 06:10:14.119155 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:10:26 crc kubenswrapper[4854]: I0103 06:10:26.119027 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:10:26 crc kubenswrapper[4854]: E0103 06:10:26.119878 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:10:41 crc kubenswrapper[4854]: I0103 06:10:41.119044 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:10:41 crc kubenswrapper[4854]: E0103 06:10:41.120214 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:10:44 crc kubenswrapper[4854]: I0103 06:10:44.093578 4854 generic.go:334] "Generic (PLEG): container finished" podID="e3bf6ad5-2f7b-4587-ab1e-8c887789d9da" containerID="ab68375283a562cccbcb84443d9b9a2694c67699231d7906f0d342e7070161a4" exitCode=0
Jan 03 06:10:44 crc kubenswrapper[4854]: I0103 06:10:44.093788 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da","Type":"ContainerDied","Data":"ab68375283a562cccbcb84443d9b9a2694c67699231d7906f0d342e7070161a4"}
Jan 03 06:10:45 crc kubenswrapper[4854]: I0103 06:10:45.108451 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"e3bf6ad5-2f7b-4587-ab1e-8c887789d9da","Type":"ContainerStarted","Data":"657f35a69843c6efce3f0826e8d0e7d2bd57bcd9a8d7a45eed13a18eed88ec9f"}
Jan 03 06:10:45 crc kubenswrapper[4854]: I0103 06:10:45.109163 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Jan 03 06:10:45 crc kubenswrapper[4854]: I0103 06:10:45.135961 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=39.135940939 podStartE2EDuration="39.135940939s" podCreationTimestamp="2026-01-03 06:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:10:45.133209081 +0000 UTC m=+1823.459785663" watchObservedRunningTime="2026-01-03 06:10:45.135940939 +0000 UTC m=+1823.462517501"
Jan 03 06:10:53 crc kubenswrapper[4854]: I0103 06:10:53.119501 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:10:53 crc kubenswrapper[4854]: E0103 06:10:53.120551 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:10:57 crc kubenswrapper[4854]: I0103 06:10:57.281358 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Jan 03 06:10:57 crc kubenswrapper[4854]: I0103 06:10:57.360985 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 03 06:10:59 crc kubenswrapper[4854]: I0103 06:10:59.790861 4854 scope.go:117] "RemoveContainer" containerID="0f058ad58c99658a32860eff06f299498f5189ce237c486e3b40dae3d2b46db0"
Jan 03 06:11:01 crc kubenswrapper[4854]: I0103 06:11:01.838643 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="rabbitmq" containerID="cri-o://0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce" gracePeriod=604796
Jan 03 06:11:05 crc kubenswrapper[4854]: I0103 06:11:05.118938 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:11:05 crc kubenswrapper[4854]: E0103 06:11:05.120281 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:11:09 crc kubenswrapper[4854]: I0103 06:11:09.041094 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.115224 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.244907 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-confd\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.244976 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5742bd8-396a-4174-a8b7-dd6deec69632-erlang-cookie-secret\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245107 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-erlang-cookie\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245569 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245621 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p8q2\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-kube-api-access-4p8q2\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245650 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-plugins\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245720 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-config-data\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245823 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5742bd8-396a-4174-a8b7-dd6deec69632-pod-info\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245901 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-server-conf\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245933 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-tls\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245970 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-plugins-conf\") pod \"b5742bd8-396a-4174-a8b7-dd6deec69632\" (UID: \"b5742bd8-396a-4174-a8b7-dd6deec69632\") "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.245965 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.246623 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.247459 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.248958 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.253797 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5742bd8-396a-4174-a8b7-dd6deec69632-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.253869 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.254904 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-kube-api-access-4p8q2" (OuterVolumeSpecName: "kube-api-access-4p8q2") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "kube-api-access-4p8q2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.272366 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b5742bd8-396a-4174-a8b7-dd6deec69632-pod-info" (OuterVolumeSpecName: "pod-info") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.289666 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f" (OuterVolumeSpecName: "persistence") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "pvc-d3da316d-166a-42b4-866b-872ff9ab007f". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.294125 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-config-data" (OuterVolumeSpecName: "config-data") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.350216 4854 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5742bd8-396a-4174-a8b7-dd6deec69632-pod-info\") on node \"crc\" DevicePath \"\""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.350255 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.350266 4854 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.350275 4854 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5742bd8-396a-4174-a8b7-dd6deec69632-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.350314 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") on node \"crc\" "
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.350328 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p8q2\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-kube-api-access-4p8q2\") on node \"crc\" DevicePath \"\""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.350338 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.350346 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.354643 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-server-conf" (OuterVolumeSpecName: "server-conf") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.391968 4854 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.392115 4854 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d3da316d-166a-42b4-866b-872ff9ab007f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f") on node "crc" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.453663 4854 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5742bd8-396a-4174-a8b7-dd6deec69632-server-conf\") on node \"crc\" DevicePath \"\"" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.453703 4854 reconciler_common.go:293] "Volume detached for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") on node \"crc\" DevicePath \"\"" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.463898 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b5742bd8-396a-4174-a8b7-dd6deec69632" (UID: "b5742bd8-396a-4174-a8b7-dd6deec69632"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.469362 4854 generic.go:334] "Generic (PLEG): container finished" podID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerID="0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce" exitCode=0 Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.469417 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5742bd8-396a-4174-a8b7-dd6deec69632","Type":"ContainerDied","Data":"0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce"} Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.469455 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5742bd8-396a-4174-a8b7-dd6deec69632","Type":"ContainerDied","Data":"7363264c49196cd57d70879897271a07b44de86859d62c3ec6a6f7523e3e8853"} Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.469474 4854 scope.go:117] "RemoveContainer" containerID="0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.469716 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.547257 4854 scope.go:117] "RemoveContainer" containerID="88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.554898 4854 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5742bd8-396a-4174-a8b7-dd6deec69632-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.563150 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.584289 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.628164 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 06:11:11 crc kubenswrapper[4854]: E0103 06:11:11.628754 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="rabbitmq" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.628766 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="rabbitmq" Jan 03 06:11:11 crc kubenswrapper[4854]: E0103 06:11:11.628791 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="setup-container" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.628797 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="setup-container" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.629033 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" containerName="rabbitmq" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.630484 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.638486 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.677282 4854 scope.go:117] "RemoveContainer" containerID="0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce" Jan 03 06:11:11 crc kubenswrapper[4854]: E0103 06:11:11.677759 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce\": container with ID starting with 0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce not found: ID does not exist" containerID="0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.677803 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce"} err="failed to get container status \"0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce\": rpc error: code = NotFound desc = could not find container \"0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce\": container with ID starting with 0b753d996ac1ea5a51f54ed40a0e4776ee6470150075864e7557c78c2d7875ce not found: ID does not exist" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.677830 4854 scope.go:117] "RemoveContainer" containerID="88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b" Jan 03 06:11:11 crc kubenswrapper[4854]: E0103 06:11:11.678891 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b\": container with ID starting with 88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b not found: ID does not exist" containerID="88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.678927 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b"} err="failed to get container status \"88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b\": rpc error: code = NotFound desc = could not find container \"88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b\": container with ID starting with 88f49f06efbbf4014f255467c048d5442670bcc7f7f5b289052869111303351b not found: ID does not exist" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.767045 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.767344 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.767432 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.767677 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56f97b0e-7c4f-409b-9953-a2db4410ee6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.767710 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.767764 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.767903 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.767959 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh6rq\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-kube-api-access-xh6rq\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.768209 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.768318 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.768397 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56f97b0e-7c4f-409b-9953-a2db4410ee6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870613 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870665 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870699 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56f97b0e-7c4f-409b-9953-a2db4410ee6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870749 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870802 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870822 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870872 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56f97b0e-7c4f-409b-9953-a2db4410ee6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870889 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870910 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870950 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"rabbitmq-server-0\" (UID: 
\"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.870972 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh6rq\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-kube-api-access-xh6rq\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.871751 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.872130 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.872852 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56f97b0e-7c4f-409b-9953-a2db4410ee6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.875535 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.877417 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.879802 4854 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.879846 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e21c3608fa8a5b41b0269f8d0775f0d8ff74b744eb534b992671df54d6ebfc27/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.879986 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56f97b0e-7c4f-409b-9953-a2db4410ee6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.880681 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.880792 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56f97b0e-7c4f-409b-9953-a2db4410ee6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.883841 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.898868 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh6rq\" (UniqueName: \"kubernetes.io/projected/56f97b0e-7c4f-409b-9953-a2db4410ee6a-kube-api-access-xh6rq\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:11 crc kubenswrapper[4854]: I0103 06:11:11.951740 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d3da316d-166a-42b4-866b-872ff9ab007f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d3da316d-166a-42b4-866b-872ff9ab007f\") pod \"rabbitmq-server-0\" (UID: \"56f97b0e-7c4f-409b-9953-a2db4410ee6a\") " pod="openstack/rabbitmq-server-0" Jan 03 06:11:12 crc kubenswrapper[4854]: I0103 06:11:12.041675 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 06:11:12 crc kubenswrapper[4854]: I0103 06:11:12.169559 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5742bd8-396a-4174-a8b7-dd6deec69632" path="/var/lib/kubelet/pods/b5742bd8-396a-4174-a8b7-dd6deec69632/volumes" Jan 03 06:11:12 crc kubenswrapper[4854]: I0103 06:11:12.536760 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 06:11:13 crc kubenswrapper[4854]: I0103 06:11:13.491480 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"56f97b0e-7c4f-409b-9953-a2db4410ee6a","Type":"ContainerStarted","Data":"eecd5409780ef284d9d15721200ed9caeac3d425f68db914256c53fd5af014f7"} Jan 03 06:11:14 crc kubenswrapper[4854]: I0103 06:11:14.508723 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"56f97b0e-7c4f-409b-9953-a2db4410ee6a","Type":"ContainerStarted","Data":"1960287e0e7e952bec8e3ee1bce98684b0b9965c2cbab31b72de6b1e96a4d4aa"} Jan 03 06:11:20 crc kubenswrapper[4854]: I0103 06:11:20.120984 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:11:20 crc kubenswrapper[4854]: E0103 06:11:20.122279 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:11:32 crc kubenswrapper[4854]: I0103 06:11:32.130691 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:11:32 crc kubenswrapper[4854]: E0103 06:11:32.132122 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:11:44 crc kubenswrapper[4854]: I0103 06:11:44.119133 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:11:44 crc kubenswrapper[4854]: E0103 06:11:44.120063 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:11:48 crc kubenswrapper[4854]: I0103 06:11:48.100934 4854 generic.go:334] "Generic (PLEG): container finished" podID="56f97b0e-7c4f-409b-9953-a2db4410ee6a" containerID="1960287e0e7e952bec8e3ee1bce98684b0b9965c2cbab31b72de6b1e96a4d4aa" exitCode=0 Jan 03 06:11:48 crc kubenswrapper[4854]: I0103 06:11:48.101021 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"56f97b0e-7c4f-409b-9953-a2db4410ee6a","Type":"ContainerDied","Data":"1960287e0e7e952bec8e3ee1bce98684b0b9965c2cbab31b72de6b1e96a4d4aa"} Jan 03 06:11:49 crc kubenswrapper[4854]: I0103 06:11:49.120124 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"56f97b0e-7c4f-409b-9953-a2db4410ee6a","Type":"ContainerStarted","Data":"c47dbd2f17d7e655c818d687b95090300d30e0134d79e235e617cd9d9e609fa3"} Jan 03 06:11:49 crc kubenswrapper[4854]: I0103 06:11:49.121056 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 03 06:11:49 crc kubenswrapper[4854]: I0103 06:11:49.163165 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.163140871 podStartE2EDuration="38.163140871s" podCreationTimestamp="2026-01-03 06:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 06:11:49.149773768 +0000 UTC m=+1887.476350410" watchObservedRunningTime="2026-01-03 06:11:49.163140871 +0000 UTC m=+1887.489717463" Jan 03 06:11:55 crc kubenswrapper[4854]: I0103 06:11:55.118159 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:11:55 crc kubenswrapper[4854]: E0103 06:11:55.118937 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:11:59 crc kubenswrapper[4854]: I0103 06:11:59.908686 4854 scope.go:117] "RemoveContainer" containerID="cd0cdff0c3dca2fa64ecaa486b3f9f9418f65ad235f30ce662331a62232f7128" Jan 03 06:11:59 crc kubenswrapper[4854]: I0103 06:11:59.937022 4854 scope.go:117] "RemoveContainer" containerID="86f9f477689f1793ce52c451d0b1e9302ba247029344f9a1a6cfa17c673ff8e7" Jan 03 06:11:59 crc kubenswrapper[4854]: I0103 06:11:59.970141 4854 scope.go:117] "RemoveContainer" containerID="f39d91732fbc3e932a86b606581cdf01cb9c30d6f97ee8558a89acfe632c15e7" Jan 03 06:12:02 crc kubenswrapper[4854]: I0103 06:12:02.045249 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 03 06:12:09 crc kubenswrapper[4854]: I0103 06:12:09.118930 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:12:09 crc kubenswrapper[4854]: E0103 06:12:09.120462 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:12:22 crc kubenswrapper[4854]: I0103 06:12:22.151611 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:12:22 crc kubenswrapper[4854]: E0103 06:12:22.153356 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:12:34 crc kubenswrapper[4854]: I0103 06:12:34.118465 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:12:34 crc kubenswrapper[4854]: E0103 06:12:34.119449 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:12:49 crc kubenswrapper[4854]: I0103 06:12:49.119319 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:12:49 crc kubenswrapper[4854]: E0103 06:12:49.120587 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:13:00 crc kubenswrapper[4854]: I0103 06:13:00.119268 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:13:00 crc kubenswrapper[4854]: E0103 06:13:00.120315 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.711420 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6nj42"] Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.717701 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.755489 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nj42"] Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.838180 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2896\" (UniqueName: \"kubernetes.io/projected/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-kube-api-access-d2896\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.838601 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-catalog-content\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.838733 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-utilities\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.941175 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-catalog-content\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.941231 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-utilities\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.941368 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2896\" (UniqueName: \"kubernetes.io/projected/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-kube-api-access-d2896\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.941745 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-catalog-content\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.941871 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-utilities\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:13 crc kubenswrapper[4854]: I0103 06:13:13.962046 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d2896\" (UniqueName: \"kubernetes.io/projected/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-kube-api-access-d2896\") pod \"redhat-operators-6nj42\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:14 crc kubenswrapper[4854]: I0103 06:13:14.053568 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:14 crc kubenswrapper[4854]: I0103 06:13:14.123457 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:13:14 crc kubenswrapper[4854]: E0103 06:13:14.123802 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:13:14 crc kubenswrapper[4854]: I0103 06:13:14.579224 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nj42"] Jan 03 06:13:14 crc kubenswrapper[4854]: W0103 06:13:14.585568 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc34c2dd4_f5f1_40bc_8619_cd1877500e5a.slice/crio-c20d81d5149775c1519f1ba9db8896d011be5c42fc32a68ef55f315d1d9003ed WatchSource:0}: Error finding container c20d81d5149775c1519f1ba9db8896d011be5c42fc32a68ef55f315d1d9003ed: Status 404 returned error can't find the container with id c20d81d5149775c1519f1ba9db8896d011be5c42fc32a68ef55f315d1d9003ed Jan 03 06:13:14 crc kubenswrapper[4854]: I0103 06:13:14.776547 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nj42" event={"ID":"c34c2dd4-f5f1-40bc-8619-cd1877500e5a","Type":"ContainerStarted","Data":"c20d81d5149775c1519f1ba9db8896d011be5c42fc32a68ef55f315d1d9003ed"} Jan 03 06:13:15 crc kubenswrapper[4854]: I0103 06:13:15.793626 4854 generic.go:334] "Generic (PLEG): container finished" podID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerID="84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2" exitCode=0 Jan 03 06:13:15 crc kubenswrapper[4854]: I0103 06:13:15.794033 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nj42" event={"ID":"c34c2dd4-f5f1-40bc-8619-cd1877500e5a","Type":"ContainerDied","Data":"84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2"} Jan 03 06:13:15 crc kubenswrapper[4854]: I0103 06:13:15.798340 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 06:13:16 crc kubenswrapper[4854]: I0103 06:13:16.808202 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nj42" event={"ID":"c34c2dd4-f5f1-40bc-8619-cd1877500e5a","Type":"ContainerStarted","Data":"201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1"} Jan 03 06:13:19 crc kubenswrapper[4854]: I0103 06:13:19.061042 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-qv4f6"] Jan 03 06:13:19 crc kubenswrapper[4854]: I0103 06:13:19.082978 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qv4f6"] Jan 03 06:13:20 crc 
kubenswrapper[4854]: I0103 06:13:20.066169 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fspcx"] Jan 03 06:13:20 crc kubenswrapper[4854]: I0103 06:13:20.078462 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fspcx"] Jan 03 06:13:20 crc kubenswrapper[4854]: I0103 06:13:20.089492 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8d7c-account-create-update-lpc6t"] Jan 03 06:13:20 crc kubenswrapper[4854]: I0103 06:13:20.099762 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8d7c-account-create-update-lpc6t"] Jan 03 06:13:20 crc kubenswrapper[4854]: I0103 06:13:20.110725 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4c08-account-create-update-7pbzj"] Jan 03 06:13:20 crc kubenswrapper[4854]: I0103 06:13:20.192647 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b06e03e-86ca-4379-9199-a4c1bddd4e33" path="/var/lib/kubelet/pods/1b06e03e-86ca-4379-9199-a4c1bddd4e33/volumes" Jan 03 06:13:20 crc kubenswrapper[4854]: I0103 06:13:20.195356 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac766a6-c8c8-4506-b86d-55b398c38783" path="/var/lib/kubelet/pods/7ac766a6-c8c8-4506-b86d-55b398c38783/volumes" Jan 03 06:13:20 crc kubenswrapper[4854]: I0103 06:13:20.196385 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8251ed1f-e0cc-48dd-8bbd-14c8753a65a3" path="/var/lib/kubelet/pods/8251ed1f-e0cc-48dd-8bbd-14c8753a65a3/volumes" Jan 03 06:13:20 crc kubenswrapper[4854]: I0103 06:13:20.197593 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4c08-account-create-update-7pbzj"] Jan 03 06:13:21 crc kubenswrapper[4854]: I0103 06:13:21.035964 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-aee3-account-create-update-9rwqc"] Jan 03 06:13:21 crc kubenswrapper[4854]: I0103 06:13:21.051117 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-aee3-account-create-update-9rwqc"] Jan 03 06:13:21 crc kubenswrapper[4854]: I0103 06:13:21.892198 4854 generic.go:334] "Generic (PLEG): container finished" podID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerID="201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1" exitCode=0 Jan 03 06:13:21 crc kubenswrapper[4854]: I0103 06:13:21.892334 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nj42" event={"ID":"c34c2dd4-f5f1-40bc-8619-cd1877500e5a","Type":"ContainerDied","Data":"201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1"} Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.039357 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-a2fa-account-create-update-5tjqb"] Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.049959 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-a2fa-account-create-update-5tjqb"] Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.132637 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c244d13-e5f5-4f26-a2b4-361a8012b0c1" path="/var/lib/kubelet/pods/7c244d13-e5f5-4f26-a2b4-361a8012b0c1/volumes" Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.133632 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e58db94-b238-4fd5-a833-0fc6f281465c" path="/var/lib/kubelet/pods/9e58db94-b238-4fd5-a833-0fc6f281465c/volumes" 
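[Editor's note] The recurring "back-off 5m0s restarting failed container=machine-config-daemon" errors surrounding this span are kubelet's crash-loop backoff at its ceiling: each failed restart roughly doubles the delay from a small initial value up to a cap, and every sync attempt inside the window logs the CrashLoopBackOff error instead of restarting the container. A sketch of that doubling-with-cap schedule, assuming the conventional 10s base and 5m cap (these parameters are assumptions, not read from this host's configuration):

    package main

    import (
        "fmt"
        "time"
    )

    // Prints the restart delays a doubling backoff with a cap produces;
    // with base=10s and cap=5m the sequence settles at the "back-off 5m0s"
    // seen repeatedly in the log above.
    func main() {
        base, maxDelay := 10*time.Second, 5*time.Minute
        delay := base
        for i := 1; i <= 8; i++ {
            fmt.Printf("restart %d: wait %s\n", i, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
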
Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.134276 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de481fb0-7bd9-496e-99e1-5a3d1a25e47b" path="/var/lib/kubelet/pods/de481fb0-7bd9-496e-99e1-5a3d1a25e47b/volumes" Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.911247 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nj42" event={"ID":"c34c2dd4-f5f1-40bc-8619-cd1877500e5a","Type":"ContainerStarted","Data":"5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d"} Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.914297 4854 generic.go:334] "Generic (PLEG): container finished" podID="4f2831c4-a6e4-46a7-85b9-32fa56e4e268" containerID="dc0cc3d304b58fbb1c337d39f96b65965a8945c36c022e835934d5d20412677c" exitCode=0 Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.914334 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" event={"ID":"4f2831c4-a6e4-46a7-85b9-32fa56e4e268","Type":"ContainerDied","Data":"dc0cc3d304b58fbb1c337d39f96b65965a8945c36c022e835934d5d20412677c"} Jan 03 06:13:22 crc kubenswrapper[4854]: I0103 06:13:22.939503 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6nj42" podStartSLOduration=3.372358604 podStartE2EDuration="9.939479918s" podCreationTimestamp="2026-01-03 06:13:13 +0000 UTC" firstStartedPulling="2026-01-03 06:13:15.797882259 +0000 UTC m=+1974.124458841" lastFinishedPulling="2026-01-03 06:13:22.365003583 +0000 UTC m=+1980.691580155" observedRunningTime="2026-01-03 06:13:22.932205559 +0000 UTC m=+1981.258782141" watchObservedRunningTime="2026-01-03 06:13:22.939479918 +0000 UTC m=+1981.266056490" Jan 03 06:13:23 crc kubenswrapper[4854]: I0103 06:13:23.045675 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-27tnm"] Jan 03 06:13:23 crc kubenswrapper[4854]: I0103 06:13:23.063140 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-27tnm"] Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.054545 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.054598 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.137471 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a21d16f-c305-4792-bad1-2eb5451b15dc" path="/var/lib/kubelet/pods/8a21d16f-c305-4792-bad1-2eb5451b15dc/volumes" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.518698 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.650909 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln6c2\" (UniqueName: \"kubernetes.io/projected/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-kube-api-access-ln6c2\") pod \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.651292 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-ssh-key\") pod \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.651404 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-inventory\") pod \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.651562 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-bootstrap-combined-ca-bundle\") pod \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\" (UID: \"4f2831c4-a6e4-46a7-85b9-32fa56e4e268\") " Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.657753 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4f2831c4-a6e4-46a7-85b9-32fa56e4e268" (UID: "4f2831c4-a6e4-46a7-85b9-32fa56e4e268"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.660424 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-kube-api-access-ln6c2" (OuterVolumeSpecName: "kube-api-access-ln6c2") pod "4f2831c4-a6e4-46a7-85b9-32fa56e4e268" (UID: "4f2831c4-a6e4-46a7-85b9-32fa56e4e268"). InnerVolumeSpecName "kube-api-access-ln6c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.684054 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-inventory" (OuterVolumeSpecName: "inventory") pod "4f2831c4-a6e4-46a7-85b9-32fa56e4e268" (UID: "4f2831c4-a6e4-46a7-85b9-32fa56e4e268"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.693639 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4f2831c4-a6e4-46a7-85b9-32fa56e4e268" (UID: "4f2831c4-a6e4-46a7-85b9-32fa56e4e268"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.755620 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.755657 4854 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.755673 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln6c2\" (UniqueName: \"kubernetes.io/projected/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-kube-api-access-ln6c2\") on node \"crc\" DevicePath \"\"" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.755684 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f2831c4-a6e4-46a7-85b9-32fa56e4e268-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.948353 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" event={"ID":"4f2831c4-a6e4-46a7-85b9-32fa56e4e268","Type":"ContainerDied","Data":"1c5dff1dd350260bb0f987173a67e157462a8e507b6f5b9b19d3844b40a8e798"} Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.948436 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5dff1dd350260bb0f987173a67e157462a8e507b6f5b9b19d3844b40a8e798" Jan 03 06:13:24 crc kubenswrapper[4854]: I0103 06:13:24.948487 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-grmm7" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.078685 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52"] Jan 03 06:13:25 crc kubenswrapper[4854]: E0103 06:13:25.079280 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2831c4-a6e4-46a7-85b9-32fa56e4e268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.079299 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2831c4-a6e4-46a7-85b9-32fa56e4e268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.079649 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2831c4-a6e4-46a7-85b9-32fa56e4e268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.080664 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.083233 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.084035 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.085427 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.085941 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.091165 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52"] Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.113417 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6nj42" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="registry-server" probeResult="failure" output=< Jan 03 06:13:25 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 06:13:25 crc kubenswrapper[4854]: > Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.167212 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.167336 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.167415 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wccjw\" (UniqueName: \"kubernetes.io/projected/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-kube-api-access-wccjw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.273673 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.273846 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: 
\"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.273893 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wccjw\" (UniqueName: \"kubernetes.io/projected/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-kube-api-access-wccjw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.279649 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.283298 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.295317 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wccjw\" (UniqueName: \"kubernetes.io/projected/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-kube-api-access-wccjw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-94z52\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.401149 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" Jan 03 06:13:25 crc kubenswrapper[4854]: I0103 06:13:25.967530 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52"] Jan 03 06:13:26 crc kubenswrapper[4854]: I0103 06:13:26.032480 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4zb5c"] Jan 03 06:13:26 crc kubenswrapper[4854]: I0103 06:13:26.044347 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4zb5c"] Jan 03 06:13:26 crc kubenswrapper[4854]: I0103 06:13:26.136727 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6dfccc-6930-4425-b5d6-511366ab6786" path="/var/lib/kubelet/pods/ec6dfccc-6930-4425-b5d6-511366ab6786/volumes" Jan 03 06:13:26 crc kubenswrapper[4854]: I0103 06:13:26.976817 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" event={"ID":"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c","Type":"ContainerStarted","Data":"4fc3931d90c1a25a0f416237a422dd329ed6d9de43a35855af446943f9ded57d"} Jan 03 06:13:26 crc kubenswrapper[4854]: I0103 06:13:26.977327 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" event={"ID":"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c","Type":"ContainerStarted","Data":"0969c3b999d0e666286265b52d99e896cf6e09df074e705f92b3bf063864b1e7"} Jan 03 06:13:27 crc kubenswrapper[4854]: I0103 06:13:27.018209 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" podStartSLOduration=1.3833751730000001 podStartE2EDuration="2.018182223s" podCreationTimestamp="2026-01-03 06:13:25 +0000 UTC" firstStartedPulling="2026-01-03 06:13:25.969881502 +0000 UTC m=+1984.296458074" lastFinishedPulling="2026-01-03 06:13:26.604688562 +0000 UTC m=+1984.931265124" observedRunningTime="2026-01-03 06:13:27.007657474 +0000 UTC m=+1985.334234116" watchObservedRunningTime="2026-01-03 06:13:27.018182223 +0000 UTC m=+1985.344758825" Jan 03 06:13:28 crc kubenswrapper[4854]: I0103 06:13:28.118364 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:13:28 crc kubenswrapper[4854]: E0103 06:13:28.119002 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:13:34 crc kubenswrapper[4854]: I0103 06:13:34.115942 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:34 crc kubenswrapper[4854]: I0103 06:13:34.174221 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:34 crc kubenswrapper[4854]: I0103 06:13:34.366465 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nj42"] Jan 03 06:13:35 crc kubenswrapper[4854]: I0103 06:13:35.056797 4854 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/root-account-create-update-55ddh"] Jan 03 06:13:35 crc kubenswrapper[4854]: I0103 06:13:35.073711 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q85bq"] Jan 03 06:13:35 crc kubenswrapper[4854]: I0103 06:13:35.091557 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-55ddh"] Jan 03 06:13:35 crc kubenswrapper[4854]: I0103 06:13:35.103836 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-94f4-account-create-update-hlktd"] Jan 03 06:13:35 crc kubenswrapper[4854]: I0103 06:13:35.113538 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q85bq"] Jan 03 06:13:35 crc kubenswrapper[4854]: I0103 06:13:35.126473 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-94f4-account-create-update-hlktd"] Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.093196 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6nj42" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="registry-server" containerID="cri-o://5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d" gracePeriod=2 Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.136456 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d4a259-3160-44ac-8509-3e52076196be" path="/var/lib/kubelet/pods/70d4a259-3160-44ac-8509-3e52076196be/volumes" Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.137840 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d219f3df-5003-4c46-a952-cdb9485b9879" path="/var/lib/kubelet/pods/d219f3df-5003-4c46-a952-cdb9485b9879/volumes" Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.139484 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d" path="/var/lib/kubelet/pods/db4bd4c9-70cf-4ee8-a3c4-71bc3a2ead5d/volumes" Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.658137 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.848894 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-utilities\") pod \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.849343 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2896\" (UniqueName: \"kubernetes.io/projected/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-kube-api-access-d2896\") pod \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.849545 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-catalog-content\") pod \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\" (UID: \"c34c2dd4-f5f1-40bc-8619-cd1877500e5a\") " Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.849781 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-utilities" (OuterVolumeSpecName: "utilities") pod "c34c2dd4-f5f1-40bc-8619-cd1877500e5a" (UID: "c34c2dd4-f5f1-40bc-8619-cd1877500e5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.850229 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.856177 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-kube-api-access-d2896" (OuterVolumeSpecName: "kube-api-access-d2896") pod "c34c2dd4-f5f1-40bc-8619-cd1877500e5a" (UID: "c34c2dd4-f5f1-40bc-8619-cd1877500e5a"). InnerVolumeSpecName "kube-api-access-d2896". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.953041 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2896\" (UniqueName: \"kubernetes.io/projected/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-kube-api-access-d2896\") on node \"crc\" DevicePath \"\"" Jan 03 06:13:36 crc kubenswrapper[4854]: I0103 06:13:36.976885 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c34c2dd4-f5f1-40bc-8619-cd1877500e5a" (UID: "c34c2dd4-f5f1-40bc-8619-cd1877500e5a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.057254 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34c2dd4-f5f1-40bc-8619-cd1877500e5a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.105699 4854 generic.go:334] "Generic (PLEG): container finished" podID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerID="5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d" exitCode=0 Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.105744 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nj42" event={"ID":"c34c2dd4-f5f1-40bc-8619-cd1877500e5a","Type":"ContainerDied","Data":"5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d"} Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.105781 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nj42" event={"ID":"c34c2dd4-f5f1-40bc-8619-cd1877500e5a","Type":"ContainerDied","Data":"c20d81d5149775c1519f1ba9db8896d011be5c42fc32a68ef55f315d1d9003ed"} Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.105803 4854 scope.go:117] "RemoveContainer" containerID="5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.105825 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nj42" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.135385 4854 scope.go:117] "RemoveContainer" containerID="201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.155030 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nj42"] Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.168691 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6nj42"] Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.178607 4854 scope.go:117] "RemoveContainer" containerID="84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.213331 4854 scope.go:117] "RemoveContainer" containerID="5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d" Jan 03 06:13:37 crc kubenswrapper[4854]: E0103 06:13:37.213764 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d\": container with ID starting with 5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d not found: ID does not exist" containerID="5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.213795 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d"} err="failed to get container status \"5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d\": rpc error: code = NotFound desc = could not find container \"5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d\": container with ID starting with 5b1163e76fad1192222256f363f96b42492223c4d3ec0042acd0438e34ac777d not found: ID does not exist" Jan 03 06:13:37 crc 
kubenswrapper[4854]: I0103 06:13:37.213819 4854 scope.go:117] "RemoveContainer" containerID="201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1" Jan 03 06:13:37 crc kubenswrapper[4854]: E0103 06:13:37.214057 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1\": container with ID starting with 201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1 not found: ID does not exist" containerID="201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.214106 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1"} err="failed to get container status \"201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1\": rpc error: code = NotFound desc = could not find container \"201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1\": container with ID starting with 201f37ab454e8aac4857264f10f3871845e312ddd8f4966017f40f25dbff91a1 not found: ID does not exist" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.214120 4854 scope.go:117] "RemoveContainer" containerID="84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2" Jan 03 06:13:37 crc kubenswrapper[4854]: E0103 06:13:37.214419 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2\": container with ID starting with 84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2 not found: ID does not exist" containerID="84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2" Jan 03 06:13:37 crc kubenswrapper[4854]: I0103 06:13:37.214439 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2"} err="failed to get container status \"84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2\": rpc error: code = NotFound desc = could not find container \"84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2\": container with ID starting with 84d3606c6e74ab0b367e1f803d032fd28550391eab8b643ca1a3bf1f347918c2 not found: ID does not exist" Jan 03 06:13:38 crc kubenswrapper[4854]: I0103 06:13:38.146703 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" path="/var/lib/kubelet/pods/c34c2dd4-f5f1-40bc-8619-cd1877500e5a/volumes" Jan 03 06:13:39 crc kubenswrapper[4854]: I0103 06:13:39.119175 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:13:39 crc kubenswrapper[4854]: E0103 06:13:39.120178 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:13:51 crc kubenswrapper[4854]: I0103 06:13:51.043070 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3a66-account-create-update-thf68"] 
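[Editor's note] The paired "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" entries above (here for the redhat-operators-6nj42 containers, earlier for rabbitmq-server-0) are benign races: the container had already been removed from CRI-O by the time kubelet re-queried it. A caller that wants to treat this case as success can test the gRPC status code; a minimal sketch (the function name is illustrative):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a CRI call failed only because the
    // container no longer exists, the NotFound condition logged above.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound, "could not find container")
        fmt.Println(alreadyGone(err)) // true: safe to treat the delete as done
    }
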
Jan 03 06:13:51 crc kubenswrapper[4854]: I0103 06:13:51.056334 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3a66-account-create-update-thf68"]
Jan 03 06:13:52 crc kubenswrapper[4854]: I0103 06:13:52.136029 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e681201f-947c-41ba-93fe-0533bd1d071a" path="/var/lib/kubelet/pods/e681201f-947c-41ba-93fe-0533bd1d071a/volumes"
Jan 03 06:13:53 crc kubenswrapper[4854]: I0103 06:13:53.119411 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b"
Jan 03 06:13:54 crc kubenswrapper[4854]: I0103 06:13:54.333944 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"0c123e736f10c9692e0df12e04db731fd8637258c1c778380ea9fd1d500829cf"}
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.058886 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-ab39-account-create-update-24pb8"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.085857 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6fa3-account-create-update-pb5n6"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.103442 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-ab39-account-create-update-24pb8"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.118204 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6fa3-account-create-update-pb5n6"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.132053 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4rfsf"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.192993 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6sqkj"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.208418 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-k7lm6"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.221591 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6sqkj"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.233120 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8c51-account-create-update-9m5qc"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.247997 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4rfsf"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.262565 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-k7lm6"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.277582 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-ddlqd"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.291359 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-ddlqd"]
Jan 03 06:13:55 crc kubenswrapper[4854]: I0103 06:13:55.304161 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8c51-account-create-update-9m5qc"]
Jan 03 06:13:56 crc kubenswrapper[4854]: I0103 06:13:56.136891 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0019311a-ce5a-4dbb-bef8-8cac6b78a304" path="/var/lib/kubelet/pods/0019311a-ce5a-4dbb-bef8-8cac6b78a304/volumes"
Jan 03 06:13:56 crc kubenswrapper[4854]: I0103 06:13:56.137969 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7134f57e-784c-4c40-b9d3-cf1e86a1237e" path="/var/lib/kubelet/pods/7134f57e-784c-4c40-b9d3-cf1e86a1237e/volumes"
Jan 03 06:13:56 crc kubenswrapper[4854]: I0103 06:13:56.138710 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7456fb80-40dc-4ef7-86ee-062ad4b064d2" path="/var/lib/kubelet/pods/7456fb80-40dc-4ef7-86ee-062ad4b064d2/volumes"
Jan 03 06:13:56 crc kubenswrapper[4854]: I0103 06:13:56.139343 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b6abe3-ad62-48fc-bd6d-8df5e103c5d4" path="/var/lib/kubelet/pods/96b6abe3-ad62-48fc-bd6d-8df5e103c5d4/volumes"
Jan 03 06:13:56 crc kubenswrapper[4854]: I0103 06:13:56.140380 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2c8def-0d1c-4a79-a63d-c6423a1b4823" path="/var/lib/kubelet/pods/ba2c8def-0d1c-4a79-a63d-c6423a1b4823/volumes"
Jan 03 06:13:56 crc kubenswrapper[4854]: I0103 06:13:56.140965 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c448f9c4-fd70-4c6d-853e-c4197af5b80b" path="/var/lib/kubelet/pods/c448f9c4-fd70-4c6d-853e-c4197af5b80b/volumes"
Jan 03 06:13:56 crc kubenswrapper[4854]: I0103 06:13:56.141568 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc310d0-38e5-41d6-a784-d8e534a5e324" path="/var/lib/kubelet/pods/fdc310d0-38e5-41d6-a784-d8e534a5e324/volumes"
Jan 03 06:13:59 crc kubenswrapper[4854]: I0103 06:13:59.067746 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-7hvb6"]
Jan 03 06:13:59 crc kubenswrapper[4854]: I0103 06:13:59.089710 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-7hvb6"]
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.138037 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d67c032-022a-4d33-95e6-cdf31147fb4c" path="/var/lib/kubelet/pods/8d67c032-022a-4d33-95e6-cdf31147fb4c/volumes"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.176631 4854 scope.go:117] "RemoveContainer" containerID="3285fcc2748cbe903ffe199d5a51313ce4d8e1adbf5d6cd6d2704540a31d2b60"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.218745 4854 scope.go:117] "RemoveContainer" containerID="a302c8cdf88997817ebc655772509708609b0e88d3d9ad22260f2adbe7bca8f9"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.271605 4854 scope.go:117] "RemoveContainer" containerID="ab50404c6d7ec773d7b40476abf6b60c9a0771004e8ec4d93f1937e47c8b1a68"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.357502 4854 scope.go:117] "RemoveContainer" containerID="59cba3b7f6080425292c17d08b60991f665dcca155120710c11d2a3a5baa2a9f"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.399300 4854 scope.go:117] "RemoveContainer" containerID="b9363bbc3e9e0398e365e3eb65cad2c07b8aa8c85d49d2c35cbbb209f78823e5"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.516591 4854 scope.go:117] "RemoveContainer" containerID="f10e5e2935b25c279f3520595d2c1e8c63da466262a6d47e5cd62567480f6a37"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.560407 4854 scope.go:117] "RemoveContainer" containerID="53658e927cd0cd5f5b0a1356e3571ca5e90e2c58ed658a69bbcd3643d85e6ffd"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.583109 4854 scope.go:117] "RemoveContainer" containerID="41d1eb780711c0fefdf14ffd19fc7f190770245d6cf2aff8db72d044258042fc"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.608024 4854 scope.go:117] "RemoveContainer" containerID="de25c359eaad7b92aa63fa4ef0fd0c752e5fac56ef7791d15044fc12d09efe7f"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.634954 4854 scope.go:117] "RemoveContainer" containerID="bb439f7ca9ee1ecbb09ecb225b0b0b7cfc74798269548d20314434bc74c38b50"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.655675 4854 scope.go:117] "RemoveContainer" containerID="5c22bce98bc4a32ece7dd30b876deb7e801b8501ffe66759f8f8501daa90c0d3"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.723320 4854 scope.go:117] "RemoveContainer" containerID="d8ab9dc75dd9131d4da0d649ace9b9643f50ef6d2ec1f4ff874297d58979af95"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.748011 4854 scope.go:117] "RemoveContainer" containerID="f63a5954aec391713b07f14e9ae550f7a5ef3bf7d214bb0ce824b671a4499301"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.775218 4854 scope.go:117] "RemoveContainer" containerID="f6a1ae9ef209985cc157ceb0e1ac708a2d2dcf2d00250be1076aa2b626ba9eec"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.798664 4854 scope.go:117] "RemoveContainer" containerID="336595bce2732d146ca99d23eabf746c246400618dd53c65b617389cb270e350"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.842872 4854 scope.go:117] "RemoveContainer" containerID="7b4138294218f9b48ab15f5cf556619572463aa1f2fb2fc782f1dc3be5637c97"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.883499 4854 scope.go:117] "RemoveContainer" containerID="f27a53f5524cc3c7b67b51715250a75176ef26bb2738c5386a7e095b62bd085c"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.910657 4854 scope.go:117] "RemoveContainer" containerID="6c2b8f2cd6d5e76f60ddfd13d59dc3b7172c67d3dea0535bf21c5fea30948d35"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.950647 4854 scope.go:117] "RemoveContainer" containerID="aebe30f1e4dfce24a408d34c32ceaff1b8ec4b22e1664456f238bb50a1112a47"
Jan 03 06:14:00 crc kubenswrapper[4854]: I0103 06:14:00.974240 4854 scope.go:117] "RemoveContainer" containerID="d605944eac94e87a457aafb1289ad4229f88a9b5361db72fa726fc00a240d35a"
Jan 03 06:14:01 crc kubenswrapper[4854]: I0103 06:14:01.005279 4854 scope.go:117] "RemoveContainer" containerID="ff6c8932491e14a996d5c0dd2761667e73f50b596c819e84d1cb1ad74860b7d1"
Jan 03 06:14:01 crc kubenswrapper[4854]: I0103 06:14:01.027864 4854 scope.go:117] "RemoveContainer" containerID="86aeb111cd41b99b4e25c4a90df9a0c5af23d8a02afdd671ecfdff248a495fec"
Jan 03 06:14:01 crc kubenswrapper[4854]: I0103 06:14:01.049481 4854 scope.go:117] "RemoveContainer" containerID="99f19915558ff686ab30f9271ddb53abb3fd788ffbefbdcb3a9af040acd5d16d"
Jan 03 06:14:38 crc kubenswrapper[4854]: I0103 06:14:38.053487 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-v8pxd"]
Jan 03 06:14:38 crc kubenswrapper[4854]: I0103 06:14:38.072266 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-v8pxd"]
Jan 03 06:14:38 crc kubenswrapper[4854]: I0103 06:14:38.144386 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9acf61c2-85c5-4ba2-9f4b-0778c961a268" path="/var/lib/kubelet/pods/9acf61c2-85c5-4ba2-9f4b-0778c961a268/volumes"
Jan 03 06:14:43 crc kubenswrapper[4854]: I0103 06:14:43.048154 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-xqtnh"]
Jan 03 06:14:43 crc kubenswrapper[4854]: I0103 06:14:43.065928 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-xqtnh"]
Jan 03 06:14:44 crc kubenswrapper[4854]: I0103 06:14:44.131403 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe19914-d9c1-4a1d-bba5-77167bca38f2" path="/var/lib/kubelet/pods/4fe19914-d9c1-4a1d-bba5-77167bca38f2/volumes"
Jan 03 06:14:48 crc kubenswrapper[4854]: I0103 06:14:48.031492 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ff9wl"]
Jan 03 06:14:48 crc kubenswrapper[4854]: I0103 06:14:48.044751 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ff9wl"]
Jan 03 06:14:48 crc kubenswrapper[4854]: I0103 06:14:48.135155 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc184ac1-7e14-435e-898d-93e19dab6615" path="/var/lib/kubelet/pods/dc184ac1-7e14-435e-898d-93e19dab6615/volumes"
Jan 03 06:14:49 crc kubenswrapper[4854]: I0103 06:14:49.061496 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-86vzw"]
Jan 03 06:14:49 crc kubenswrapper[4854]: I0103 06:14:49.081567 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-86vzw"]
Jan 03 06:14:50 crc kubenswrapper[4854]: I0103 06:14:50.133196 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8beded3-7a32-47a0-a12a-e346422e7323" path="/var/lib/kubelet/pods/c8beded3-7a32-47a0-a12a-e346422e7323/volumes"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.163034 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"]
Jan 03 06:15:00 crc kubenswrapper[4854]: E0103 06:15:00.164732 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="extract-content"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.164754 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="extract-content"
Jan 03 06:15:00 crc kubenswrapper[4854]: E0103 06:15:00.164790 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="registry-server"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.164800 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="registry-server"
Jan 03 06:15:00 crc kubenswrapper[4854]: E0103 06:15:00.164865 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="extract-utilities"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.164877 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="extract-utilities"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.165300 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34c2dd4-f5f1-40bc-8619-cd1877500e5a" containerName="registry-server"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.166729 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.174351 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.174369 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.179171 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"]
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.237862 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7d049d5-9c6d-4970-b922-adfc41096230-config-volume\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.237944 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p56z\" (UniqueName: \"kubernetes.io/projected/e7d049d5-9c6d-4970-b922-adfc41096230-kube-api-access-2p56z\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.238008 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7d049d5-9c6d-4970-b922-adfc41096230-secret-volume\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.340800 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p56z\" (UniqueName: \"kubernetes.io/projected/e7d049d5-9c6d-4970-b922-adfc41096230-kube-api-access-2p56z\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.340896 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7d049d5-9c6d-4970-b922-adfc41096230-secret-volume\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.341179 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7d049d5-9c6d-4970-b922-adfc41096230-config-volume\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.342159 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7d049d5-9c6d-4970-b922-adfc41096230-config-volume\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.349395 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7d049d5-9c6d-4970-b922-adfc41096230-secret-volume\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.371790 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p56z\" (UniqueName: \"kubernetes.io/projected/e7d049d5-9c6d-4970-b922-adfc41096230-kube-api-access-2p56z\") pod \"collect-profiles-29457015-7c4zz\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:00 crc kubenswrapper[4854]: I0103 06:15:00.497377 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.059829 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"]
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.238561 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz" event={"ID":"e7d049d5-9c6d-4970-b922-adfc41096230","Type":"ContainerStarted","Data":"e71a21291cc9ac4253a20463dcb2aa1659c43329feee3ab7a3ba42da77c493bf"}
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.578488 4854 scope.go:117] "RemoveContainer" containerID="f606d764a110f6198b5d3de30409756daae46bec06b2bee8fcbb7ef90ea5e19f"
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.658856 4854 scope.go:117] "RemoveContainer" containerID="4bb770e2e51976e2c8823a59adef7ec53a945d6c6e9d03eef75da7eba5dd1f0c"
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.688749 4854 scope.go:117] "RemoveContainer" containerID="52b8654c0bbaccf80b54446637937ba13332e3f5312bc9670ce3a8571a939151"
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.732276 4854 scope.go:117] "RemoveContainer" containerID="eb3520fc3c3653658357c578dc1ab6472976eef6377fb81043938c28784b4dce"
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.787994 4854 scope.go:117] "RemoveContainer" containerID="5b934e7550dec6ee13cc8c7fe7a463b3cbaad26a3d961beee14b114f14323ff8"
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.853975 4854 scope.go:117] "RemoveContainer" containerID="7ad3dc12935ce23f1c5caad7b24a62c1c73514796ef5fdcd0e70dae3c56ee113"
Jan 03 06:15:01 crc kubenswrapper[4854]: I0103 06:15:01.923142 4854 scope.go:117] "RemoveContainer" containerID="f1833f5de57547fabe165b85587da17e832a06ca017f7f36d0429ef14d552a1b"
Jan 03 06:15:02 crc kubenswrapper[4854]: I0103 06:15:02.034358 4854 scope.go:117] "RemoveContainer" containerID="6695a30f220b17a6b189176b8b5bfae4f3b9348bd25b12c3c4c19f3146613282"
Jan 03 06:15:02 crc kubenswrapper[4854]: I0103 06:15:02.258532 4854 generic.go:334] "Generic (PLEG): container finished" podID="e7d049d5-9c6d-4970-b922-adfc41096230" containerID="75162c99648dbc36cd470c0935bb302596449ec1ff34ac2718c86301c81169fa" exitCode=0
Jan 03 06:15:02 crc kubenswrapper[4854]: I0103 06:15:02.258611 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz" event={"ID":"e7d049d5-9c6d-4970-b922-adfc41096230","Type":"ContainerDied","Data":"75162c99648dbc36cd470c0935bb302596449ec1ff34ac2718c86301c81169fa"}
Jan 03 06:15:03 crc kubenswrapper[4854]: I0103 06:15:03.850282 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:03 crc kubenswrapper[4854]: I0103 06:15:03.980604 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p56z\" (UniqueName: \"kubernetes.io/projected/e7d049d5-9c6d-4970-b922-adfc41096230-kube-api-access-2p56z\") pod \"e7d049d5-9c6d-4970-b922-adfc41096230\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") "
Jan 03 06:15:03 crc kubenswrapper[4854]: I0103 06:15:03.980791 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7d049d5-9c6d-4970-b922-adfc41096230-config-volume\") pod \"e7d049d5-9c6d-4970-b922-adfc41096230\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") "
Jan 03 06:15:03 crc kubenswrapper[4854]: I0103 06:15:03.981027 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7d049d5-9c6d-4970-b922-adfc41096230-secret-volume\") pod \"e7d049d5-9c6d-4970-b922-adfc41096230\" (UID: \"e7d049d5-9c6d-4970-b922-adfc41096230\") "
Jan 03 06:15:03 crc kubenswrapper[4854]: I0103 06:15:03.981616 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7d049d5-9c6d-4970-b922-adfc41096230-config-volume" (OuterVolumeSpecName: "config-volume") pod "e7d049d5-9c6d-4970-b922-adfc41096230" (UID: "e7d049d5-9c6d-4970-b922-adfc41096230"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 06:15:03 crc kubenswrapper[4854]: I0103 06:15:03.981878 4854 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7d049d5-9c6d-4970-b922-adfc41096230-config-volume\") on node \"crc\" DevicePath \"\""
Jan 03 06:15:03 crc kubenswrapper[4854]: I0103 06:15:03.988026 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7d049d5-9c6d-4970-b922-adfc41096230-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e7d049d5-9c6d-4970-b922-adfc41096230" (UID: "e7d049d5-9c6d-4970-b922-adfc41096230"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:15:03 crc kubenswrapper[4854]: I0103 06:15:03.989738 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d049d5-9c6d-4970-b922-adfc41096230-kube-api-access-2p56z" (OuterVolumeSpecName: "kube-api-access-2p56z") pod "e7d049d5-9c6d-4970-b922-adfc41096230" (UID: "e7d049d5-9c6d-4970-b922-adfc41096230"). InnerVolumeSpecName "kube-api-access-2p56z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:15:04 crc kubenswrapper[4854]: I0103 06:15:04.084273 4854 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7d049d5-9c6d-4970-b922-adfc41096230-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 03 06:15:04 crc kubenswrapper[4854]: I0103 06:15:04.084871 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p56z\" (UniqueName: \"kubernetes.io/projected/e7d049d5-9c6d-4970-b922-adfc41096230-kube-api-access-2p56z\") on node \"crc\" DevicePath \"\""
Jan 03 06:15:04 crc kubenswrapper[4854]: I0103 06:15:04.309668 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz" event={"ID":"e7d049d5-9c6d-4970-b922-adfc41096230","Type":"ContainerDied","Data":"e71a21291cc9ac4253a20463dcb2aa1659c43329feee3ab7a3ba42da77c493bf"}
Jan 03 06:15:04 crc kubenswrapper[4854]: I0103 06:15:04.309710 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e71a21291cc9ac4253a20463dcb2aa1659c43329feee3ab7a3ba42da77c493bf"
Jan 03 06:15:04 crc kubenswrapper[4854]: I0103 06:15:04.309774 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"
Jan 03 06:15:04 crc kubenswrapper[4854]: I0103 06:15:04.942261 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7"]
Jan 03 06:15:04 crc kubenswrapper[4854]: I0103 06:15:04.954847 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456970-f9qt7"]
Jan 03 06:15:06 crc kubenswrapper[4854]: I0103 06:15:06.136216 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a86a6b-3a2c-4994-af93-2b4ae754edfa" path="/var/lib/kubelet/pods/36a86a6b-3a2c-4994-af93-2b4ae754edfa/volumes"
Jan 03 06:15:13 crc kubenswrapper[4854]: I0103 06:15:13.044058 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lk7dp"]
Jan 03 06:15:13 crc kubenswrapper[4854]: I0103 06:15:13.057302 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lk7dp"]
Jan 03 06:15:14 crc kubenswrapper[4854]: I0103 06:15:14.131513 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4" path="/var/lib/kubelet/pods/cc6ba4e7-fb71-40f0-b478-e12ed5fc5ae4/volumes"
Jan 03 06:15:16 crc kubenswrapper[4854]: I0103 06:15:16.047344 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-sd52b"]
Jan 03 06:15:16 crc kubenswrapper[4854]: I0103 06:15:16.062229 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-sd52b"]
Jan 03 06:15:16 crc kubenswrapper[4854]: I0103 06:15:16.131268 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca061deb-f600-49db-8ac3-6213e22b2f76" path="/var/lib/kubelet/pods/ca061deb-f600-49db-8ac3-6213e22b2f76/volumes"
Jan 03 06:15:51 crc kubenswrapper[4854]: I0103 06:15:51.037418 4854 generic.go:334] "Generic (PLEG): container finished" podID="b423ec25-e1c9-4a78-b3fc-8887b5eedc7c" containerID="4fc3931d90c1a25a0f416237a422dd329ed6d9de43a35855af446943f9ded57d" exitCode=0
Jan 03 06:15:51 crc kubenswrapper[4854]: I0103 06:15:51.037545 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" event={"ID":"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c","Type":"ContainerDied","Data":"4fc3931d90c1a25a0f416237a422dd329ed6d9de43a35855af446943f9ded57d"}
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.705653 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52"
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.874001 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-inventory\") pod \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") "
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.874111 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-ssh-key\") pod \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") "
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.874310 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wccjw\" (UniqueName: \"kubernetes.io/projected/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-kube-api-access-wccjw\") pod \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\" (UID: \"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c\") "
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.882381 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-kube-api-access-wccjw" (OuterVolumeSpecName: "kube-api-access-wccjw") pod "b423ec25-e1c9-4a78-b3fc-8887b5eedc7c" (UID: "b423ec25-e1c9-4a78-b3fc-8887b5eedc7c"). InnerVolumeSpecName "kube-api-access-wccjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.926399 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b423ec25-e1c9-4a78-b3fc-8887b5eedc7c" (UID: "b423ec25-e1c9-4a78-b3fc-8887b5eedc7c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.930284 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-inventory" (OuterVolumeSpecName: "inventory") pod "b423ec25-e1c9-4a78-b3fc-8887b5eedc7c" (UID: "b423ec25-e1c9-4a78-b3fc-8887b5eedc7c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.977849 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wccjw\" (UniqueName: \"kubernetes.io/projected/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-kube-api-access-wccjw\") on node \"crc\" DevicePath \"\""
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.977904 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-inventory\") on node \"crc\" DevicePath \"\""
Jan 03 06:15:52 crc kubenswrapper[4854]: I0103 06:15:52.977923 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b423ec25-e1c9-4a78-b3fc-8887b5eedc7c-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.060766 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52" event={"ID":"b423ec25-e1c9-4a78-b3fc-8887b5eedc7c","Type":"ContainerDied","Data":"0969c3b999d0e666286265b52d99e896cf6e09df074e705f92b3bf063864b1e7"}
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.060814 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0969c3b999d0e666286265b52d99e896cf6e09df074e705f92b3bf063864b1e7"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.060843 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-94z52"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.153024 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"]
Jan 03 06:15:53 crc kubenswrapper[4854]: E0103 06:15:53.153707 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d049d5-9c6d-4970-b922-adfc41096230" containerName="collect-profiles"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.153735 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d049d5-9c6d-4970-b922-adfc41096230" containerName="collect-profiles"
Jan 03 06:15:53 crc kubenswrapper[4854]: E0103 06:15:53.153757 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b423ec25-e1c9-4a78-b3fc-8887b5eedc7c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.153768 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b423ec25-e1c9-4a78-b3fc-8887b5eedc7c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.154112 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d049d5-9c6d-4970-b922-adfc41096230" containerName="collect-profiles"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.154156 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b423ec25-e1c9-4a78-b3fc-8887b5eedc7c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.155050 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.156794 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.157229 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.157307 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.159315 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.163678 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"]
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.285112 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.285354 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxl9x\" (UniqueName: \"kubernetes.io/projected/24100053-a781-448a-91e0-f75033961d9a-kube-api-access-cxl9x\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.285424 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.387948 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxl9x\" (UniqueName: \"kubernetes.io/projected/24100053-a781-448a-91e0-f75033961d9a-kube-api-access-cxl9x\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.388563 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.388682 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.394274 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.398330 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.410736 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxl9x\" (UniqueName: \"kubernetes.io/projected/24100053-a781-448a-91e0-f75033961d9a-kube-api-access-cxl9x\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-77kqc\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:53 crc kubenswrapper[4854]: I0103 06:15:53.486212 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"
Jan 03 06:15:54 crc kubenswrapper[4854]: I0103 06:15:54.076435 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc"]
Jan 03 06:15:54 crc kubenswrapper[4854]: W0103 06:15:54.086316 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24100053_a781_448a_91e0_f75033961d9a.slice/crio-871cddd57461d399543aebb6880ba7661529744b0cdbd02214f23aee4856fd9e WatchSource:0}: Error finding container 871cddd57461d399543aebb6880ba7661529744b0cdbd02214f23aee4856fd9e: Status 404 returned error can't find the container with id 871cddd57461d399543aebb6880ba7661529744b0cdbd02214f23aee4856fd9e
Jan 03 06:15:55 crc kubenswrapper[4854]: I0103 06:15:55.085328 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc" event={"ID":"24100053-a781-448a-91e0-f75033961d9a","Type":"ContainerStarted","Data":"6b3476ebc27ffd3bd570fd09899350ce5f54d7137d4bc1c5c79864ace8e7971f"}
Jan 03 06:15:55 crc kubenswrapper[4854]: I0103 06:15:55.085860 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc" event={"ID":"24100053-a781-448a-91e0-f75033961d9a","Type":"ContainerStarted","Data":"871cddd57461d399543aebb6880ba7661529744b0cdbd02214f23aee4856fd9e"}
Jan 03 06:15:55 crc kubenswrapper[4854]: I0103 06:15:55.114753 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc" podStartSLOduration=1.588426481 podStartE2EDuration="2.114734159s" podCreationTimestamp="2026-01-03 06:15:53 +0000 UTC" firstStartedPulling="2026-01-03 06:15:54.091297581 +0000 UTC m=+2132.417874153" lastFinishedPulling="2026-01-03 06:15:54.617605239 +0000 UTC m=+2132.944181831" observedRunningTime="2026-01-03 06:15:55.106409584 +0000 UTC m=+2133.432986166" watchObservedRunningTime="2026-01-03 06:15:55.114734159 +0000 UTC m=+2133.441310721"
Jan 03 06:16:02 crc kubenswrapper[4854]: I0103 06:16:02.289761 4854 scope.go:117] "RemoveContainer" containerID="e3b5c91257f418ab8f271fe9fa7d08b1009bc5b328230b69226f1d7bb15dd647"
Jan 03 06:16:02 crc kubenswrapper[4854]: I0103 06:16:02.316464 4854 scope.go:117] "RemoveContainer" containerID="53c039f0963d33593ee947a8a3ea2c025e9c4672bac14f8a3efbc479981065a6"
Jan 03 06:16:02 crc kubenswrapper[4854]: I0103 06:16:02.373412 4854 scope.go:117] "RemoveContainer" containerID="0d2642e00dc964fc0711de5e89b1f8f3eb4f9c0a908d3d734cf3e533f0ab34e1"
Jan 03 06:16:09 crc kubenswrapper[4854]: I0103 06:16:09.077446 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jvwwp"]
Jan 03 06:16:09 crc kubenswrapper[4854]: I0103 06:16:09.093701 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-02bf-account-create-update-k67dd"]
Jan 03 06:16:09 crc kubenswrapper[4854]: I0103 06:16:09.108648 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jvwwp"]
Jan 03 06:16:09 crc kubenswrapper[4854]: I0103 06:16:09.120685 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-02bf-account-create-update-k67dd"]
Jan 03 06:16:09 crc kubenswrapper[4854]: I0103 06:16:09.134845 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0bd1-account-create-update-jzm2v"]
Jan 03 06:16:09 crc kubenswrapper[4854]: I0103 06:16:09.147565 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0bd1-account-create-update-jzm2v"]
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.057779 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lr4nt"]
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.071295 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-lr4nt"]
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.084232 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5jcbw"]
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.093895 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5jcbw"]
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.106888 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e562-account-create-update-k68zp"]
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.135077 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa6af8e-d27e-4727-a8bc-4a2e5690cc88" path="/var/lib/kubelet/pods/1aa6af8e-d27e-4727-a8bc-4a2e5690cc88/volumes"
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.137482 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b6b681-415b-40ee-9510-801116f895c8" path="/var/lib/kubelet/pods/82b6b681-415b-40ee-9510-801116f895c8/volumes"
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.139904 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04b84e4-f513-40cc-bd0e-852449fb839d" path="/var/lib/kubelet/pods/a04b84e4-f513-40cc-bd0e-852449fb839d/volumes"
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.141962 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7b23ad2-3ba6-44d4-88a0-aad1458970d0" path="/var/lib/kubelet/pods/a7b23ad2-3ba6-44d4-88a0-aad1458970d0/volumes"
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.145179 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8224418-e8de-49e2-a7f5-059ea9ed6f72" path="/var/lib/kubelet/pods/e8224418-e8de-49e2-a7f5-059ea9ed6f72/volumes"
Jan 03 06:16:10 crc kubenswrapper[4854]: I0103 06:16:10.147541 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e562-account-create-update-k68zp"]
Jan 03 06:16:11 crc kubenswrapper[4854]: I0103 06:16:11.755946 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 06:16:11 crc kubenswrapper[4854]: I0103 06:16:11.756503 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 06:16:12 crc kubenswrapper[4854]: I0103 06:16:12.134001 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ee7c12-946c-4a6c-b15b-15cd1c15bd30" path="/var/lib/kubelet/pods/a1ee7c12-946c-4a6c-b15b-15cd1c15bd30/volumes"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.590964 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6gct"]
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.641281 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6gct"]
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.641441 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.662111 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-catalog-content\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.662223 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49rf\" (UniqueName: \"kubernetes.io/projected/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-kube-api-access-z49rf\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.662294 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-utilities\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.764521 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z49rf\" (UniqueName: \"kubernetes.io/projected/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-kube-api-access-z49rf\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.765169 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-utilities\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.765432 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-catalog-content\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.766038 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-catalog-content\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.766351 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-utilities\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.783664 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z49rf\" (UniqueName: \"kubernetes.io/projected/99f863f5-fa79-40f0-8ee2-d3d75b6c3df2-kube-api-access-z49rf\") pod \"community-operators-s6gct\" (UID: \"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2\") " pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:13 crc kubenswrapper[4854]: I0103 06:16:13.985652 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:14 crc kubenswrapper[4854]: I0103 06:16:14.890914 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6gct"]
Jan 03 06:16:15 crc kubenswrapper[4854]: I0103 06:16:15.376864 4854 generic.go:334] "Generic (PLEG): container finished" podID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerID="db4cdbdf46d29955ef82664bea8f592b464ea8eec755a071359c5b8cee3fc336" exitCode=0
Jan 03 06:16:15 crc kubenswrapper[4854]: I0103 06:16:15.376982 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6gct" event={"ID":"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2","Type":"ContainerDied","Data":"db4cdbdf46d29955ef82664bea8f592b464ea8eec755a071359c5b8cee3fc336"}
Jan 03 06:16:15 crc kubenswrapper[4854]: I0103 06:16:15.377204 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6gct" event={"ID":"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2","Type":"ContainerStarted","Data":"3c03075ff50db9d6d631a75fa8c21647d2c7f037142385909aef3ffb93277276"}
Jan 03 06:16:21 crc kubenswrapper[4854]: I0103 06:16:21.454303 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6gct" event={"ID":"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2","Type":"ContainerStarted","Data":"abaf72645dd862731e315609b2b4941682d7fe5b3086a23d29ce089f7908eeb4"}
Jan 03 06:16:22 crc kubenswrapper[4854]: I0103 06:16:22.466614 4854 generic.go:334] "Generic (PLEG): container finished" podID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerID="abaf72645dd862731e315609b2b4941682d7fe5b3086a23d29ce089f7908eeb4" exitCode=0
Jan 03 06:16:22 crc kubenswrapper[4854]: I0103 06:16:22.467158 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6gct" event={"ID":"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2","Type":"ContainerDied","Data":"abaf72645dd862731e315609b2b4941682d7fe5b3086a23d29ce089f7908eeb4"}
Jan 03 06:16:23 crc kubenswrapper[4854]: I0103 06:16:23.482374 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6gct" event={"ID":"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2","Type":"ContainerStarted","Data":"dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46"}
Jan 03 06:16:23 crc kubenswrapper[4854]: I0103 06:16:23.512340 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6gct" podStartSLOduration=3.009471703 podStartE2EDuration="10.512321586s" podCreationTimestamp="2026-01-03 06:16:13 +0000 UTC" firstStartedPulling="2026-01-03 06:16:15.380118978 +0000 UTC m=+2153.706695550" lastFinishedPulling="2026-01-03 06:16:22.882968851 +0000 UTC m=+2161.209545433" observedRunningTime="2026-01-03 06:16:23.501133541 +0000 UTC m=+2161.827710133" watchObservedRunningTime="2026-01-03 06:16:23.512321586 +0000 UTC m=+2161.838898168"
Jan 03 06:16:23 crc kubenswrapper[4854]: I0103 06:16:23.985846 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:23 crc kubenswrapper[4854]: I0103 06:16:23.986169 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:25 crc kubenswrapper[4854]: I0103 06:16:25.036734 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" probeResult="failure" output=<
Jan 03 06:16:25 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s
Jan 03 06:16:25 crc kubenswrapper[4854]: >
Jan 03 06:16:34 crc kubenswrapper[4854]: I0103 06:16:34.066799 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:34 crc kubenswrapper[4854]: I0103 06:16:34.144351 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6gct"
Jan 03 06:16:34 crc kubenswrapper[4854]: I0103 06:16:34.233243 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6gct"]
Jan 03 06:16:34 crc kubenswrapper[4854]: I0103 06:16:34.333754 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5dgzc"]
Jan 03 06:16:34 crc kubenswrapper[4854]: I0103 06:16:34.333990 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5dgzc" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerName="registry-server" containerID="cri-o://96652880f3451c0b6b3acae039a18348100116918f04cc23b505025360e35b4d" gracePeriod=2
Jan 03 06:16:34 crc kubenswrapper[4854]: I0103 06:16:34.643018 4854 generic.go:334] "Generic (PLEG): container finished" podID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerID="96652880f3451c0b6b3acae039a18348100116918f04cc23b505025360e35b4d" exitCode=0
Jan 03 06:16:34 crc kubenswrapper[4854]: I0103 06:16:34.643183 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dgzc" event={"ID":"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d","Type":"ContainerDied","Data":"96652880f3451c0b6b3acae039a18348100116918f04cc23b505025360e35b4d"}
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.415665 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.507278 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dsls\" (UniqueName: \"kubernetes.io/projected/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-kube-api-access-9dsls\") pod \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") "
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.507522 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-catalog-content\") pod \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") "
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.507562 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-utilities\") pod \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\" (UID: \"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d\") "
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.508121 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-utilities" (OuterVolumeSpecName: "utilities") pod "ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" (UID: "ff5e61bf-01e1-4eb9-93fd-89ac38c3932d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.510179 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-utilities\") on node \"crc\" DevicePath \"\""
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.520346 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-kube-api-access-9dsls" (OuterVolumeSpecName: "kube-api-access-9dsls") pod "ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" (UID: "ff5e61bf-01e1-4eb9-93fd-89ac38c3932d"). InnerVolumeSpecName "kube-api-access-9dsls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.557869 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" (UID: "ff5e61bf-01e1-4eb9-93fd-89ac38c3932d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.613154 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dsls\" (UniqueName: \"kubernetes.io/projected/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-kube-api-access-9dsls\") on node \"crc\" DevicePath \"\""
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.613190 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.657510 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dgzc" event={"ID":"ff5e61bf-01e1-4eb9-93fd-89ac38c3932d","Type":"ContainerDied","Data":"5db31a81547c7efb2d754b7e82fb164cc210c6be342d754ee7f929049c05691d"}
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.657598 4854 scope.go:117] "RemoveContainer" containerID="96652880f3451c0b6b3acae039a18348100116918f04cc23b505025360e35b4d"
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.657541 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5dgzc"
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.699375 4854 scope.go:117] "RemoveContainer" containerID="c4ea397908a7c07b111569c8e0104d4fe1722b8a9a6aabedf655948c947af0bc"
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.710142 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5dgzc"]
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.716578 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5dgzc"]
Jan 03 06:16:35 crc kubenswrapper[4854]: I0103 06:16:35.755970 4854 scope.go:117] "RemoveContainer" containerID="ced53a9960cd13e78f3ac4bbe4ed04116667a8b8b4fcf8dd75e7590593936639"
Jan 03 06:16:36 crc kubenswrapper[4854]: I0103 06:16:36.135535 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" path="/var/lib/kubelet/pods/ff5e61bf-01e1-4eb9-93fd-89ac38c3932d/volumes"
Jan 03 06:16:39 crc kubenswrapper[4854]: I0103 06:16:39.070748 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptr5k"]
Jan 03 06:16:39 crc kubenswrapper[4854]: I0103 06:16:39.086001 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptr5k"]
Jan 03 06:16:40 crc kubenswrapper[4854]: I0103 06:16:40.136700 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffba700-7bb8-458d-b50f-322985473e2d" path="/var/lib/kubelet/pods/0ffba700-7bb8-458d-b50f-322985473e2d/volumes"
Jan 03 06:16:41 crc kubenswrapper[4854]: I0103 06:16:41.755851 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 06:16:41 crc kubenswrapper[4854]: I0103 06:16:41.756272 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 06:16:57 crc kubenswrapper[4854]: I0103 06:16:57.051778 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-45z7t"]
Jan 03 06:16:57 crc kubenswrapper[4854]: I0103 06:16:57.068905 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-45z7t"]
Jan 03 06:16:58 crc kubenswrapper[4854]: I0103 06:16:58.136324 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d7a9cf-9ea2-4e02-b431-4c6b1df21337" path="/var/lib/kubelet/pods/33d7a9cf-9ea2-4e02-b431-4c6b1df21337/volumes"
Jan 03 06:16:59 crc kubenswrapper[4854]: I0103 06:16:59.080732 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-b805-account-create-update-hw8rw"]
Jan 03 06:16:59 crc kubenswrapper[4854]: I0103 06:16:59.102550 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-b805-account-create-update-hw8rw"]
Jan 03 06:17:00 crc kubenswrapper[4854]: I0103 06:17:00.142461 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0800292-1f7a-4d53-85b2-f256b8b27b7f" path="/var/lib/kubelet/pods/c0800292-1f7a-4d53-85b2-f256b8b27b7f/volumes"
Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.405144 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vv2zx"]
Jan 03 06:17:01 crc kubenswrapper[4854]: E0103 06:17:01.405905 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerName="registry-server"
Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.405931 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerName="registry-server"
Jan 03 06:17:01 crc kubenswrapper[4854]: E0103 06:17:01.405968 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerName="extract-utilities"
Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.405980 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerName="extract-utilities"
Jan 03 06:17:01 crc kubenswrapper[4854]: E0103 06:17:01.405997 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerName="extract-content"
Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.406009 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerName="extract-content"
Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.406459 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5e61bf-01e1-4eb9-93fd-89ac38c3932d" containerName="registry-server"
Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.410261 4854 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.417313 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vv2zx"] Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.482589 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-catalog-content\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.482705 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s4gc\" (UniqueName: \"kubernetes.io/projected/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-kube-api-access-2s4gc\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.482776 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-utilities\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.585280 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-catalog-content\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.585444 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s4gc\" (UniqueName: \"kubernetes.io/projected/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-kube-api-access-2s4gc\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.586045 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-utilities\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.586037 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-catalog-content\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.586498 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-utilities\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.609689 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2s4gc\" (UniqueName: \"kubernetes.io/projected/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-kube-api-access-2s4gc\") pod \"certified-operators-vv2zx\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:01 crc kubenswrapper[4854]: I0103 06:17:01.750431 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.336403 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vv2zx"] Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.526035 4854 scope.go:117] "RemoveContainer" containerID="c518a6dfeb4244807dc93fe5d4ebd2b6813dc690c8bb874dc6ca36cba420f27b" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.552017 4854 scope.go:117] "RemoveContainer" containerID="4b9ea40ee8db87fb23371989a5e7c70468dd6a8379abdf18a9ad3a1e0d124b25" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.579257 4854 scope.go:117] "RemoveContainer" containerID="479bd588e0cebe0f28adc4beafc651688f9a9714769b9b7ed71c6309d93e8b28" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.606856 4854 scope.go:117] "RemoveContainer" containerID="c4c4b41efa4c18bfbe4ffb15e8f65666f6bf430e325d83a138b571f34baf8da0" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.761404 4854 scope.go:117] "RemoveContainer" containerID="601848ec57f14b23dfba8b7d3ce2cacb396834177d0434e06bcd0fcdb811bb8f" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.816739 4854 scope.go:117] "RemoveContainer" containerID="a46991e8b0af15112f224c8ae0d956377d62db81674a0f7d67408d34eba15989" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.854139 4854 scope.go:117] "RemoveContainer" containerID="a587d926d89f4f7548fe5710ef01a724d92db7802a84764b2b2f8e035c7622b1" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.947136 4854 scope.go:117] "RemoveContainer" containerID="eb553f9592b832041cf1fdf2c9ef408e88c6069171e67e27b21406565c319ae4" Jan 03 06:17:02 crc kubenswrapper[4854]: I0103 06:17:02.984707 4854 scope.go:117] "RemoveContainer" containerID="4f1afe4a0d42833cb881a50df3661bf589dc97c1dfd0b8b3b0ad1fae7b32e14f" Jan 03 06:17:03 crc kubenswrapper[4854]: I0103 06:17:03.049376 4854 generic.go:334] "Generic (PLEG): container finished" podID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerID="08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4" exitCode=0 Jan 03 06:17:03 crc kubenswrapper[4854]: I0103 06:17:03.049411 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv2zx" event={"ID":"ac1db8a1-1ac3-4513-97f1-142e4a8e192e","Type":"ContainerDied","Data":"08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4"} Jan 03 06:17:03 crc kubenswrapper[4854]: I0103 06:17:03.049432 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv2zx" event={"ID":"ac1db8a1-1ac3-4513-97f1-142e4a8e192e","Type":"ContainerStarted","Data":"644ca9be2c358fcfa284b2fb5c80401dcde6ca6fe6d0b5fb1e459f7e9bfd05c7"} Jan 03 06:17:04 crc kubenswrapper[4854]: I0103 06:17:04.077164 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv2zx" event={"ID":"ac1db8a1-1ac3-4513-97f1-142e4a8e192e","Type":"ContainerStarted","Data":"83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52"} Jan 03 06:17:06 crc kubenswrapper[4854]: I0103 
06:17:06.101789 4854 generic.go:334] "Generic (PLEG): container finished" podID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerID="83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52" exitCode=0 Jan 03 06:17:06 crc kubenswrapper[4854]: I0103 06:17:06.101860 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv2zx" event={"ID":"ac1db8a1-1ac3-4513-97f1-142e4a8e192e","Type":"ContainerDied","Data":"83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52"} Jan 03 06:17:07 crc kubenswrapper[4854]: I0103 06:17:07.115578 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv2zx" event={"ID":"ac1db8a1-1ac3-4513-97f1-142e4a8e192e","Type":"ContainerStarted","Data":"404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5"} Jan 03 06:17:07 crc kubenswrapper[4854]: I0103 06:17:07.148564 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vv2zx" podStartSLOduration=2.614770639 podStartE2EDuration="6.148538787s" podCreationTimestamp="2026-01-03 06:17:01 +0000 UTC" firstStartedPulling="2026-01-03 06:17:03.061115007 +0000 UTC m=+2201.387691579" lastFinishedPulling="2026-01-03 06:17:06.594883155 +0000 UTC m=+2204.921459727" observedRunningTime="2026-01-03 06:17:07.135072845 +0000 UTC m=+2205.461649417" watchObservedRunningTime="2026-01-03 06:17:07.148538787 +0000 UTC m=+2205.475115369" Jan 03 06:17:09 crc kubenswrapper[4854]: I0103 06:17:09.038564 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-27kbq"] Jan 03 06:17:09 crc kubenswrapper[4854]: I0103 06:17:09.053301 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-27kbq"] Jan 03 06:17:10 crc kubenswrapper[4854]: I0103 06:17:10.032767 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6q98m"] Jan 03 06:17:10 crc kubenswrapper[4854]: I0103 06:17:10.045771 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6q98m"] Jan 03 06:17:10 crc kubenswrapper[4854]: I0103 06:17:10.130806 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc666e24-12a2-4bea-bded-bb83c896dc9d" path="/var/lib/kubelet/pods/bc666e24-12a2-4bea-bded-bb83c896dc9d/volumes" Jan 03 06:17:10 crc kubenswrapper[4854]: I0103 06:17:10.132939 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93995e8-15fe-446c-b731-ade43a634b9b" path="/var/lib/kubelet/pods/f93995e8-15fe-446c-b731-ade43a634b9b/volumes" Jan 03 06:17:11 crc kubenswrapper[4854]: I0103 06:17:11.751438 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:11 crc kubenswrapper[4854]: I0103 06:17:11.752963 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:11 crc kubenswrapper[4854]: I0103 06:17:11.755520 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:17:11 crc kubenswrapper[4854]: I0103 06:17:11.755586 4854 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:17:11 crc kubenswrapper[4854]: I0103 06:17:11.755639 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:17:11 crc kubenswrapper[4854]: I0103 06:17:11.756490 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c123e736f10c9692e0df12e04db731fd8637258c1c778380ea9fd1d500829cf"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:17:11 crc kubenswrapper[4854]: I0103 06:17:11.756568 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://0c123e736f10c9692e0df12e04db731fd8637258c1c778380ea9fd1d500829cf" gracePeriod=600 Jan 03 06:17:11 crc kubenswrapper[4854]: I0103 06:17:11.805041 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:12 crc kubenswrapper[4854]: I0103 06:17:12.256736 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="0c123e736f10c9692e0df12e04db731fd8637258c1c778380ea9fd1d500829cf" exitCode=0 Jan 03 06:17:12 crc kubenswrapper[4854]: I0103 06:17:12.256841 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"0c123e736f10c9692e0df12e04db731fd8637258c1c778380ea9fd1d500829cf"} Jan 03 06:17:12 crc kubenswrapper[4854]: I0103 06:17:12.257394 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658"} Jan 03 06:17:12 crc kubenswrapper[4854]: I0103 06:17:12.257429 4854 scope.go:117] "RemoveContainer" containerID="1db592b68f62b6c2ad08a01037bad35421ca86a46de4654a3ad5f8ce76bb6f1b" Jan 03 06:17:12 crc kubenswrapper[4854]: I0103 06:17:12.332915 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:12 crc kubenswrapper[4854]: I0103 06:17:12.393261 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vv2zx"] Jan 03 06:17:14 crc kubenswrapper[4854]: I0103 06:17:14.283015 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vv2zx" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerName="registry-server" containerID="cri-o://404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5" gracePeriod=2 Jan 03 06:17:14 crc kubenswrapper[4854]: I0103 06:17:14.903369 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.076814 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-utilities\") pod \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.076956 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s4gc\" (UniqueName: \"kubernetes.io/projected/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-kube-api-access-2s4gc\") pod \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.077060 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-catalog-content\") pod \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\" (UID: \"ac1db8a1-1ac3-4513-97f1-142e4a8e192e\") " Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.078577 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-utilities" (OuterVolumeSpecName: "utilities") pod "ac1db8a1-1ac3-4513-97f1-142e4a8e192e" (UID: "ac1db8a1-1ac3-4513-97f1-142e4a8e192e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.091895 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-kube-api-access-2s4gc" (OuterVolumeSpecName: "kube-api-access-2s4gc") pod "ac1db8a1-1ac3-4513-97f1-142e4a8e192e" (UID: "ac1db8a1-1ac3-4513-97f1-142e4a8e192e"). InnerVolumeSpecName "kube-api-access-2s4gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.156993 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac1db8a1-1ac3-4513-97f1-142e4a8e192e" (UID: "ac1db8a1-1ac3-4513-97f1-142e4a8e192e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.180941 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.180985 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s4gc\" (UniqueName: \"kubernetes.io/projected/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-kube-api-access-2s4gc\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.180999 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac1db8a1-1ac3-4513-97f1-142e4a8e192e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.297487 4854 generic.go:334] "Generic (PLEG): container finished" podID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerID="404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5" exitCode=0 Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.297519 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vv2zx" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.297575 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv2zx" event={"ID":"ac1db8a1-1ac3-4513-97f1-142e4a8e192e","Type":"ContainerDied","Data":"404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5"} Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.297629 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vv2zx" event={"ID":"ac1db8a1-1ac3-4513-97f1-142e4a8e192e","Type":"ContainerDied","Data":"644ca9be2c358fcfa284b2fb5c80401dcde6ca6fe6d0b5fb1e459f7e9bfd05c7"} Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.297648 4854 scope.go:117] "RemoveContainer" containerID="404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.319103 4854 scope.go:117] "RemoveContainer" containerID="83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.358889 4854 scope.go:117] "RemoveContainer" containerID="08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.363825 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vv2zx"] Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.382280 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vv2zx"] Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.423440 4854 scope.go:117] "RemoveContainer" containerID="404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5" Jan 03 06:17:15 crc kubenswrapper[4854]: E0103 06:17:15.424039 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5\": container with ID starting with 404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5 not found: ID does not exist" containerID="404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.424135 
4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5"} err="failed to get container status \"404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5\": rpc error: code = NotFound desc = could not find container \"404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5\": container with ID starting with 404feab5d41b15e8dc16614362fbe163f8ab521a90c1b77b8e198ee6dfd96ee5 not found: ID does not exist" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.424175 4854 scope.go:117] "RemoveContainer" containerID="83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52" Jan 03 06:17:15 crc kubenswrapper[4854]: E0103 06:17:15.424564 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52\": container with ID starting with 83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52 not found: ID does not exist" containerID="83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.424610 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52"} err="failed to get container status \"83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52\": rpc error: code = NotFound desc = could not find container \"83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52\": container with ID starting with 83850a241d3632de7540417a825e6319a4b5aca9e9cd63c3e58ed7da755f5f52 not found: ID does not exist" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.424643 4854 scope.go:117] "RemoveContainer" containerID="08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4" Jan 03 06:17:15 crc kubenswrapper[4854]: E0103 06:17:15.424910 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4\": container with ID starting with 08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4 not found: ID does not exist" containerID="08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4" Jan 03 06:17:15 crc kubenswrapper[4854]: I0103 06:17:15.424948 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4"} err="failed to get container status \"08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4\": rpc error: code = NotFound desc = could not find container \"08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4\": container with ID starting with 08db5af03d9679ae8e187e1dedc1cd3e68acc9cda57682e7eccf285d41d000d4 not found: ID does not exist" Jan 03 06:17:16 crc kubenswrapper[4854]: I0103 06:17:16.169058 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" path="/var/lib/kubelet/pods/ac1db8a1-1ac3-4513-97f1-142e4a8e192e/volumes" Jan 03 06:17:35 crc kubenswrapper[4854]: I0103 06:17:35.567958 4854 generic.go:334] "Generic (PLEG): container finished" podID="24100053-a781-448a-91e0-f75033961d9a" containerID="6b3476ebc27ffd3bd570fd09899350ce5f54d7137d4bc1c5c79864ace8e7971f" exitCode=0 Jan 03 06:17:35 crc kubenswrapper[4854]: 
I0103 06:17:35.568126 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc" event={"ID":"24100053-a781-448a-91e0-f75033961d9a","Type":"ContainerDied","Data":"6b3476ebc27ffd3bd570fd09899350ce5f54d7137d4bc1c5c79864ace8e7971f"} Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.152495 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.235145 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxl9x\" (UniqueName: \"kubernetes.io/projected/24100053-a781-448a-91e0-f75033961d9a-kube-api-access-cxl9x\") pod \"24100053-a781-448a-91e0-f75033961d9a\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.235664 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-inventory\") pod \"24100053-a781-448a-91e0-f75033961d9a\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.236132 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-ssh-key\") pod \"24100053-a781-448a-91e0-f75033961d9a\" (UID: \"24100053-a781-448a-91e0-f75033961d9a\") " Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.247017 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24100053-a781-448a-91e0-f75033961d9a-kube-api-access-cxl9x" (OuterVolumeSpecName: "kube-api-access-cxl9x") pod "24100053-a781-448a-91e0-f75033961d9a" (UID: "24100053-a781-448a-91e0-f75033961d9a"). InnerVolumeSpecName "kube-api-access-cxl9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.283040 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "24100053-a781-448a-91e0-f75033961d9a" (UID: "24100053-a781-448a-91e0-f75033961d9a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.283906 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-inventory" (OuterVolumeSpecName: "inventory") pod "24100053-a781-448a-91e0-f75033961d9a" (UID: "24100053-a781-448a-91e0-f75033961d9a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.341775 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.341822 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxl9x\" (UniqueName: \"kubernetes.io/projected/24100053-a781-448a-91e0-f75033961d9a-kube-api-access-cxl9x\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.341839 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24100053-a781-448a-91e0-f75033961d9a-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.599071 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc" event={"ID":"24100053-a781-448a-91e0-f75033961d9a","Type":"ContainerDied","Data":"871cddd57461d399543aebb6880ba7661529744b0cdbd02214f23aee4856fd9e"} Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.599163 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="871cddd57461d399543aebb6880ba7661529744b0cdbd02214f23aee4856fd9e" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.599196 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-77kqc" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.711066 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr"] Jan 03 06:17:37 crc kubenswrapper[4854]: E0103 06:17:37.711783 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerName="extract-utilities" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.711809 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerName="extract-utilities" Jan 03 06:17:37 crc kubenswrapper[4854]: E0103 06:17:37.711857 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerName="registry-server" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.711866 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerName="registry-server" Jan 03 06:17:37 crc kubenswrapper[4854]: E0103 06:17:37.711884 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerName="extract-content" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.711893 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerName="extract-content" Jan 03 06:17:37 crc kubenswrapper[4854]: E0103 06:17:37.711913 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24100053-a781-448a-91e0-f75033961d9a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.711926 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="24100053-a781-448a-91e0-f75033961d9a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.712308 4854 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1db8a1-1ac3-4513-97f1-142e4a8e192e" containerName="registry-server" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.712341 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="24100053-a781-448a-91e0-f75033961d9a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.713518 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.717362 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.717548 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.717600 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.717557 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.744229 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr"] Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.851559 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.851756 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q6p5\" (UniqueName: \"kubernetes.io/projected/5b200476-e3bd-4760-bc5f-eeea6d8d6780-kube-api-access-8q6p5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.851872 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: E0103 06:17:37.895475 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24100053_a781_448a_91e0_f75033961d9a.slice/crio-871cddd57461d399543aebb6880ba7661529744b0cdbd02214f23aee4856fd9e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24100053_a781_448a_91e0_f75033961d9a.slice\": RecentStats: unable to find data in memory cache]" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.959923 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.961688 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q6p5\" (UniqueName: \"kubernetes.io/projected/5b200476-e3bd-4760-bc5f-eeea6d8d6780-kube-api-access-8q6p5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.961782 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.966124 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.967108 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:37 crc kubenswrapper[4854]: I0103 06:17:37.978143 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q6p5\" (UniqueName: \"kubernetes.io/projected/5b200476-e3bd-4760-bc5f-eeea6d8d6780-kube-api-access-8q6p5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74rvr\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:38 crc kubenswrapper[4854]: I0103 06:17:38.043463 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:38 crc kubenswrapper[4854]: I0103 06:17:38.667718 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr"] Jan 03 06:17:39 crc kubenswrapper[4854]: I0103 06:17:39.641200 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" event={"ID":"5b200476-e3bd-4760-bc5f-eeea6d8d6780","Type":"ContainerStarted","Data":"7dcdbbd37740ed59939a5a12a90b95e2633138598c777b8b6f096995f2bf4d6c"} Jan 03 06:17:39 crc kubenswrapper[4854]: I0103 06:17:39.641721 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" event={"ID":"5b200476-e3bd-4760-bc5f-eeea6d8d6780","Type":"ContainerStarted","Data":"349e2ce87f23da7d1d7d9849d342a8790702b4d7e29d1796fd6a7f334f4cc861"} Jan 03 06:17:39 crc kubenswrapper[4854]: I0103 06:17:39.680101 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" podStartSLOduration=2.052935544 podStartE2EDuration="2.680061378s" podCreationTimestamp="2026-01-03 06:17:37 +0000 UTC" firstStartedPulling="2026-01-03 06:17:38.68119764 +0000 UTC m=+2237.007774222" lastFinishedPulling="2026-01-03 06:17:39.308323444 +0000 UTC m=+2237.634900056" observedRunningTime="2026-01-03 06:17:39.666696379 +0000 UTC m=+2237.993272981" watchObservedRunningTime="2026-01-03 06:17:39.680061378 +0000 UTC m=+2238.006637960" Jan 03 06:17:45 crc kubenswrapper[4854]: I0103 06:17:45.713517 4854 generic.go:334] "Generic (PLEG): container finished" podID="5b200476-e3bd-4760-bc5f-eeea6d8d6780" containerID="7dcdbbd37740ed59939a5a12a90b95e2633138598c777b8b6f096995f2bf4d6c" exitCode=0 Jan 03 06:17:45 crc kubenswrapper[4854]: I0103 06:17:45.713688 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" event={"ID":"5b200476-e3bd-4760-bc5f-eeea6d8d6780","Type":"ContainerDied","Data":"7dcdbbd37740ed59939a5a12a90b95e2633138598c777b8b6f096995f2bf4d6c"} Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.215322 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.347158 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-inventory\") pod \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.347569 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-ssh-key\") pod \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.347993 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q6p5\" (UniqueName: \"kubernetes.io/projected/5b200476-e3bd-4760-bc5f-eeea6d8d6780-kube-api-access-8q6p5\") pod \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\" (UID: \"5b200476-e3bd-4760-bc5f-eeea6d8d6780\") " Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.353921 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b200476-e3bd-4760-bc5f-eeea6d8d6780-kube-api-access-8q6p5" (OuterVolumeSpecName: "kube-api-access-8q6p5") pod "5b200476-e3bd-4760-bc5f-eeea6d8d6780" (UID: "5b200476-e3bd-4760-bc5f-eeea6d8d6780"). InnerVolumeSpecName "kube-api-access-8q6p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.397119 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5b200476-e3bd-4760-bc5f-eeea6d8d6780" (UID: "5b200476-e3bd-4760-bc5f-eeea6d8d6780"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.410652 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-inventory" (OuterVolumeSpecName: "inventory") pod "5b200476-e3bd-4760-bc5f-eeea6d8d6780" (UID: "5b200476-e3bd-4760-bc5f-eeea6d8d6780"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.451723 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q6p5\" (UniqueName: \"kubernetes.io/projected/5b200476-e3bd-4760-bc5f-eeea6d8d6780-kube-api-access-8q6p5\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.451770 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.451785 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b200476-e3bd-4760-bc5f-eeea6d8d6780-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.740385 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" event={"ID":"5b200476-e3bd-4760-bc5f-eeea6d8d6780","Type":"ContainerDied","Data":"349e2ce87f23da7d1d7d9849d342a8790702b4d7e29d1796fd6a7f334f4cc861"} Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.740714 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="349e2ce87f23da7d1d7d9849d342a8790702b4d7e29d1796fd6a7f334f4cc861" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.740475 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74rvr" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.869314 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh"] Jan 03 06:17:47 crc kubenswrapper[4854]: E0103 06:17:47.869801 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b200476-e3bd-4760-bc5f-eeea6d8d6780" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.869886 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b200476-e3bd-4760-bc5f-eeea6d8d6780" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.870157 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b200476-e3bd-4760-bc5f-eeea6d8d6780" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.871141 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.874717 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.874785 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.875208 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.875551 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.890431 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh"] Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.969481 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.969625 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4c49\" (UniqueName: \"kubernetes.io/projected/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-kube-api-access-p4c49\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:47 crc kubenswrapper[4854]: I0103 06:17:47.969800 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:48 crc kubenswrapper[4854]: I0103 06:17:48.071745 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:48 crc kubenswrapper[4854]: I0103 06:17:48.071883 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4c49\" (UniqueName: \"kubernetes.io/projected/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-kube-api-access-p4c49\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:48 crc kubenswrapper[4854]: I0103 06:17:48.072124 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: 
\"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:48 crc kubenswrapper[4854]: I0103 06:17:48.078219 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:48 crc kubenswrapper[4854]: I0103 06:17:48.078442 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:48 crc kubenswrapper[4854]: I0103 06:17:48.094167 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4c49\" (UniqueName: \"kubernetes.io/projected/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-kube-api-access-p4c49\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsjgh\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:48 crc kubenswrapper[4854]: I0103 06:17:48.195396 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:17:48 crc kubenswrapper[4854]: I0103 06:17:48.868003 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh"] Jan 03 06:17:49 crc kubenswrapper[4854]: I0103 06:17:49.767954 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" event={"ID":"5109b775-53b0-4103-8cfc-1b9d0bcb7e10","Type":"ContainerStarted","Data":"69393dc3047539732b7f2ad67c860ad51d86d1e72851cba2cbd160a6dd2f5fe0"} Jan 03 06:17:49 crc kubenswrapper[4854]: I0103 06:17:49.768566 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" event={"ID":"5109b775-53b0-4103-8cfc-1b9d0bcb7e10","Type":"ContainerStarted","Data":"706fac5b4005c7bf3de918d9aed1589cabed1ae9fe572b881453eb101632145c"} Jan 03 06:17:49 crc kubenswrapper[4854]: I0103 06:17:49.814000 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" podStartSLOduration=2.374049888 podStartE2EDuration="2.813976148s" podCreationTimestamp="2026-01-03 06:17:47 +0000 UTC" firstStartedPulling="2026-01-03 06:17:48.881049432 +0000 UTC m=+2247.207626004" lastFinishedPulling="2026-01-03 06:17:49.320975692 +0000 UTC m=+2247.647552264" observedRunningTime="2026-01-03 06:17:49.790691545 +0000 UTC m=+2248.117268127" watchObservedRunningTime="2026-01-03 06:17:49.813976148 +0000 UTC m=+2248.140552740" Jan 03 06:17:50 crc kubenswrapper[4854]: I0103 06:17:50.052424 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lkfxj"] Jan 03 06:17:50 crc kubenswrapper[4854]: I0103 06:17:50.062593 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lkfxj"] Jan 03 06:17:50 crc kubenswrapper[4854]: I0103 06:17:50.133360 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="66b95f7c-2775-47c3-ad74-dd5ffe92a9a5" path="/var/lib/kubelet/pods/66b95f7c-2775-47c3-ad74-dd5ffe92a9a5/volumes" Jan 03 06:18:03 crc kubenswrapper[4854]: I0103 06:18:03.218177 4854 scope.go:117] "RemoveContainer" containerID="7582cbb1742cfeac6bc5235eced5dc9da19d4b654e9eed76cd80622e26bcaaf3" Jan 03 06:18:03 crc kubenswrapper[4854]: I0103 06:18:03.275806 4854 scope.go:117] "RemoveContainer" containerID="af0831dd29cf03129c1714e21950c8a6ef74079e760c0d89508cfbe7f72d2a74" Jan 03 06:18:03 crc kubenswrapper[4854]: I0103 06:18:03.319107 4854 scope.go:117] "RemoveContainer" containerID="75759a88f790e3f8574815d73410c788d3731d4c04f53edf6e75193f1d017620" Jan 03 06:18:32 crc kubenswrapper[4854]: I0103 06:18:32.392217 4854 generic.go:334] "Generic (PLEG): container finished" podID="5109b775-53b0-4103-8cfc-1b9d0bcb7e10" containerID="69393dc3047539732b7f2ad67c860ad51d86d1e72851cba2cbd160a6dd2f5fe0" exitCode=0 Jan 03 06:18:32 crc kubenswrapper[4854]: I0103 06:18:32.392341 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" event={"ID":"5109b775-53b0-4103-8cfc-1b9d0bcb7e10","Type":"ContainerDied","Data":"69393dc3047539732b7f2ad67c860ad51d86d1e72851cba2cbd160a6dd2f5fe0"} Jan 03 06:18:33 crc kubenswrapper[4854]: I0103 06:18:33.956636 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:18:33 crc kubenswrapper[4854]: I0103 06:18:33.985161 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-inventory\") pod \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.056896 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-inventory" (OuterVolumeSpecName: "inventory") pod "5109b775-53b0-4103-8cfc-1b9d0bcb7e10" (UID: "5109b775-53b0-4103-8cfc-1b9d0bcb7e10"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.087014 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4c49\" (UniqueName: \"kubernetes.io/projected/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-kube-api-access-p4c49\") pod \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.087854 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-ssh-key\") pod \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\" (UID: \"5109b775-53b0-4103-8cfc-1b9d0bcb7e10\") " Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.089576 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.092854 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-kube-api-access-p4c49" (OuterVolumeSpecName: "kube-api-access-p4c49") pod "5109b775-53b0-4103-8cfc-1b9d0bcb7e10" (UID: "5109b775-53b0-4103-8cfc-1b9d0bcb7e10"). InnerVolumeSpecName "kube-api-access-p4c49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.132640 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5109b775-53b0-4103-8cfc-1b9d0bcb7e10" (UID: "5109b775-53b0-4103-8cfc-1b9d0bcb7e10"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.192822 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.192867 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4c49\" (UniqueName: \"kubernetes.io/projected/5109b775-53b0-4103-8cfc-1b9d0bcb7e10-kube-api-access-p4c49\") on node \"crc\" DevicePath \"\"" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.416112 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" event={"ID":"5109b775-53b0-4103-8cfc-1b9d0bcb7e10","Type":"ContainerDied","Data":"706fac5b4005c7bf3de918d9aed1589cabed1ae9fe572b881453eb101632145c"} Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.416163 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="706fac5b4005c7bf3de918d9aed1589cabed1ae9fe572b881453eb101632145c" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.416231 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsjgh" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.548885 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq"] Jan 03 06:18:34 crc kubenswrapper[4854]: E0103 06:18:34.551566 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5109b775-53b0-4103-8cfc-1b9d0bcb7e10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.551593 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="5109b775-53b0-4103-8cfc-1b9d0bcb7e10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.551825 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="5109b775-53b0-4103-8cfc-1b9d0bcb7e10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.552881 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.555785 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.558403 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.565260 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.565295 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.578644 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq"] Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.704003 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.704445 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.704665 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72z9f\" (UniqueName: \"kubernetes.io/projected/a18cc103-af9c-4d23-b559-e99b60229d39-kube-api-access-72z9f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.806486 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-72z9f\" (UniqueName: \"kubernetes.io/projected/a18cc103-af9c-4d23-b559-e99b60229d39-kube-api-access-72z9f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.806619 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.806706 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.811896 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.812448 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.827477 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72z9f\" (UniqueName: \"kubernetes.io/projected/a18cc103-af9c-4d23-b559-e99b60229d39-kube-api-access-72z9f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:34 crc kubenswrapper[4854]: I0103 06:18:34.877871 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:18:35 crc kubenswrapper[4854]: I0103 06:18:35.527281 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq"] Jan 03 06:18:35 crc kubenswrapper[4854]: W0103 06:18:35.534846 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda18cc103_af9c_4d23_b559_e99b60229d39.slice/crio-f18a4d45880dbf298083ee8da28d21a13b68ebc27384c9a859d94b449dc1c8aa WatchSource:0}: Error finding container f18a4d45880dbf298083ee8da28d21a13b68ebc27384c9a859d94b449dc1c8aa: Status 404 returned error can't find the container with id f18a4d45880dbf298083ee8da28d21a13b68ebc27384c9a859d94b449dc1c8aa Jan 03 06:18:35 crc kubenswrapper[4854]: I0103 06:18:35.537183 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 06:18:36 crc kubenswrapper[4854]: I0103 06:18:36.438515 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" event={"ID":"a18cc103-af9c-4d23-b559-e99b60229d39","Type":"ContainerStarted","Data":"f6397fce9a0928cb3740ffedaae21f5c36ef6c306371e722d8bab7ad4ee37e39"} Jan 03 06:18:36 crc kubenswrapper[4854]: I0103 06:18:36.438909 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" event={"ID":"a18cc103-af9c-4d23-b559-e99b60229d39","Type":"ContainerStarted","Data":"f18a4d45880dbf298083ee8da28d21a13b68ebc27384c9a859d94b449dc1c8aa"} Jan 03 06:18:36 crc kubenswrapper[4854]: I0103 06:18:36.462097 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" podStartSLOduration=2.034040071 podStartE2EDuration="2.462059249s" podCreationTimestamp="2026-01-03 06:18:34 +0000 UTC" firstStartedPulling="2026-01-03 06:18:35.536901254 +0000 UTC m=+2293.863477826" lastFinishedPulling="2026-01-03 06:18:35.964920432 +0000 UTC m=+2294.291497004" observedRunningTime="2026-01-03 06:18:36.455717904 +0000 UTC m=+2294.782294486" watchObservedRunningTime="2026-01-03 06:18:36.462059249 +0000 UTC m=+2294.788635831" Jan 03 06:19:11 crc kubenswrapper[4854]: I0103 06:19:11.056118 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-2sprl"] Jan 03 06:19:11 crc kubenswrapper[4854]: I0103 06:19:11.071429 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-2sprl"] Jan 03 06:19:12 crc kubenswrapper[4854]: I0103 06:19:12.131036 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa56f84-4a50-4350-b256-5987e5b990bb" path="/var/lib/kubelet/pods/8fa56f84-4a50-4350-b256-5987e5b990bb/volumes" Jan 03 06:19:34 crc kubenswrapper[4854]: I0103 06:19:34.165053 4854 generic.go:334] "Generic (PLEG): container finished" podID="a18cc103-af9c-4d23-b559-e99b60229d39" containerID="f6397fce9a0928cb3740ffedaae21f5c36ef6c306371e722d8bab7ad4ee37e39" exitCode=0 Jan 03 06:19:34 crc kubenswrapper[4854]: I0103 06:19:34.165189 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" event={"ID":"a18cc103-af9c-4d23-b559-e99b60229d39","Type":"ContainerDied","Data":"f6397fce9a0928cb3740ffedaae21f5c36ef6c306371e722d8bab7ad4ee37e39"} Jan 03 06:19:35 crc kubenswrapper[4854]: I0103 06:19:35.862587 4854 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:19:35 crc kubenswrapper[4854]: I0103 06:19:35.988736 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72z9f\" (UniqueName: \"kubernetes.io/projected/a18cc103-af9c-4d23-b559-e99b60229d39-kube-api-access-72z9f\") pod \"a18cc103-af9c-4d23-b559-e99b60229d39\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " Jan 03 06:19:35 crc kubenswrapper[4854]: I0103 06:19:35.988972 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-ssh-key\") pod \"a18cc103-af9c-4d23-b559-e99b60229d39\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " Jan 03 06:19:35 crc kubenswrapper[4854]: I0103 06:19:35.989147 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-inventory\") pod \"a18cc103-af9c-4d23-b559-e99b60229d39\" (UID: \"a18cc103-af9c-4d23-b559-e99b60229d39\") " Jan 03 06:19:35 crc kubenswrapper[4854]: I0103 06:19:35.995526 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18cc103-af9c-4d23-b559-e99b60229d39-kube-api-access-72z9f" (OuterVolumeSpecName: "kube-api-access-72z9f") pod "a18cc103-af9c-4d23-b559-e99b60229d39" (UID: "a18cc103-af9c-4d23-b559-e99b60229d39"). InnerVolumeSpecName "kube-api-access-72z9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.023954 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-inventory" (OuterVolumeSpecName: "inventory") pod "a18cc103-af9c-4d23-b559-e99b60229d39" (UID: "a18cc103-af9c-4d23-b559-e99b60229d39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.025945 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a18cc103-af9c-4d23-b559-e99b60229d39" (UID: "a18cc103-af9c-4d23-b559-e99b60229d39"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.092409 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72z9f\" (UniqueName: \"kubernetes.io/projected/a18cc103-af9c-4d23-b559-e99b60229d39-kube-api-access-72z9f\") on node \"crc\" DevicePath \"\"" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.092446 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.092460 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a18cc103-af9c-4d23-b559-e99b60229d39-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.193799 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" event={"ID":"a18cc103-af9c-4d23-b559-e99b60229d39","Type":"ContainerDied","Data":"f18a4d45880dbf298083ee8da28d21a13b68ebc27384c9a859d94b449dc1c8aa"} Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.193837 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f18a4d45880dbf298083ee8da28d21a13b68ebc27384c9a859d94b449dc1c8aa" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.193887 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnxbq" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.269038 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-45cvp"] Jan 03 06:19:36 crc kubenswrapper[4854]: E0103 06:19:36.269979 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18cc103-af9c-4d23-b559-e99b60229d39" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.270001 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18cc103-af9c-4d23-b559-e99b60229d39" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.270331 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18cc103-af9c-4d23-b559-e99b60229d39" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.271410 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.276860 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.549557 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.549719 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.549844 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.573962 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-45cvp"] Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.651167 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.651286 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bkjp\" (UniqueName: \"kubernetes.io/projected/f1ebb09c-5822-4552-a1f4-c2b72f911e74-kube-api-access-7bkjp\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.652611 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.755557 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.755649 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bkjp\" (UniqueName: \"kubernetes.io/projected/f1ebb09c-5822-4552-a1f4-c2b72f911e74-kube-api-access-7bkjp\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.755761 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc 
kubenswrapper[4854]: I0103 06:19:36.760536 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.761048 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.776411 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bkjp\" (UniqueName: \"kubernetes.io/projected/f1ebb09c-5822-4552-a1f4-c2b72f911e74-kube-api-access-7bkjp\") pod \"ssh-known-hosts-edpm-deployment-45cvp\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:36 crc kubenswrapper[4854]: I0103 06:19:36.849345 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:37 crc kubenswrapper[4854]: I0103 06:19:37.509310 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-45cvp"] Jan 03 06:19:38 crc kubenswrapper[4854]: I0103 06:19:38.214452 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" event={"ID":"f1ebb09c-5822-4552-a1f4-c2b72f911e74","Type":"ContainerStarted","Data":"bbec2791390fd494b1e64f627c8f8ebdf025ab73a1fbeb472d607e99fe302aa9"} Jan 03 06:19:39 crc kubenswrapper[4854]: I0103 06:19:39.438479 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" event={"ID":"f1ebb09c-5822-4552-a1f4-c2b72f911e74","Type":"ContainerStarted","Data":"9bab5b248ac37788932cf4042af42ec1766ab66a5da6a8aca0317292df938dcc"} Jan 03 06:19:39 crc kubenswrapper[4854]: I0103 06:19:39.470446 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" podStartSLOduration=2.9419605559999997 podStartE2EDuration="3.470421964s" podCreationTimestamp="2026-01-03 06:19:36 +0000 UTC" firstStartedPulling="2026-01-03 06:19:37.524352989 +0000 UTC m=+2355.850929561" lastFinishedPulling="2026-01-03 06:19:38.052814357 +0000 UTC m=+2356.379390969" observedRunningTime="2026-01-03 06:19:39.463031883 +0000 UTC m=+2357.789608475" watchObservedRunningTime="2026-01-03 06:19:39.470421964 +0000 UTC m=+2357.796998576" Jan 03 06:19:41 crc kubenswrapper[4854]: I0103 06:19:41.756077 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:19:41 crc kubenswrapper[4854]: I0103 06:19:41.756584 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:19:46 crc kubenswrapper[4854]: I0103 06:19:46.515762 4854 generic.go:334] "Generic (PLEG): container finished" podID="f1ebb09c-5822-4552-a1f4-c2b72f911e74" containerID="9bab5b248ac37788932cf4042af42ec1766ab66a5da6a8aca0317292df938dcc" exitCode=0 Jan 03 06:19:46 crc kubenswrapper[4854]: I0103 06:19:46.516411 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" event={"ID":"f1ebb09c-5822-4552-a1f4-c2b72f911e74","Type":"ContainerDied","Data":"9bab5b248ac37788932cf4042af42ec1766ab66a5da6a8aca0317292df938dcc"} Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.011512 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.148136 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bkjp\" (UniqueName: \"kubernetes.io/projected/f1ebb09c-5822-4552-a1f4-c2b72f911e74-kube-api-access-7bkjp\") pod \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.148448 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-ssh-key-openstack-edpm-ipam\") pod \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.148580 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-inventory-0\") pod \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\" (UID: \"f1ebb09c-5822-4552-a1f4-c2b72f911e74\") " Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.153331 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ebb09c-5822-4552-a1f4-c2b72f911e74-kube-api-access-7bkjp" (OuterVolumeSpecName: "kube-api-access-7bkjp") pod "f1ebb09c-5822-4552-a1f4-c2b72f911e74" (UID: "f1ebb09c-5822-4552-a1f4-c2b72f911e74"). InnerVolumeSpecName "kube-api-access-7bkjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.189737 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1ebb09c-5822-4552-a1f4-c2b72f911e74" (UID: "f1ebb09c-5822-4552-a1f4-c2b72f911e74"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.191211 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f1ebb09c-5822-4552-a1f4-c2b72f911e74" (UID: "f1ebb09c-5822-4552-a1f4-c2b72f911e74"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.252785 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bkjp\" (UniqueName: \"kubernetes.io/projected/f1ebb09c-5822-4552-a1f4-c2b72f911e74-kube-api-access-7bkjp\") on node \"crc\" DevicePath \"\"" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.253103 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.253146 4854 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f1ebb09c-5822-4552-a1f4-c2b72f911e74-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.538009 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" event={"ID":"f1ebb09c-5822-4552-a1f4-c2b72f911e74","Type":"ContainerDied","Data":"bbec2791390fd494b1e64f627c8f8ebdf025ab73a1fbeb472d607e99fe302aa9"} Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.538074 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbec2791390fd494b1e64f627c8f8ebdf025ab73a1fbeb472d607e99fe302aa9" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.538158 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-45cvp" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.713021 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps"] Jan 03 06:19:48 crc kubenswrapper[4854]: E0103 06:19:48.721729 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ebb09c-5822-4552-a1f4-c2b72f911e74" containerName="ssh-known-hosts-edpm-deployment" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.721758 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ebb09c-5822-4552-a1f4-c2b72f911e74" containerName="ssh-known-hosts-edpm-deployment" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.722062 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ebb09c-5822-4552-a1f4-c2b72f911e74" containerName="ssh-known-hosts-edpm-deployment" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.723119 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.739602 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps"] Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.755397 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.755516 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.755633 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.756030 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.871172 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.871220 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.871381 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7kdx\" (UniqueName: \"kubernetes.io/projected/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-kube-api-access-p7kdx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.973524 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.973586 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.973780 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7kdx\" (UniqueName: \"kubernetes.io/projected/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-kube-api-access-p7kdx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.979420 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.987786 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:48 crc kubenswrapper[4854]: I0103 06:19:48.995226 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7kdx\" (UniqueName: \"kubernetes.io/projected/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-kube-api-access-p7kdx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-npjps\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:49 crc kubenswrapper[4854]: I0103 06:19:49.075788 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:19:49 crc kubenswrapper[4854]: I0103 06:19:49.449048 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps"] Jan 03 06:19:49 crc kubenswrapper[4854]: I0103 06:19:49.551319 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" event={"ID":"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4","Type":"ContainerStarted","Data":"c9a62cf514cda064c90fdf5677a062c39a34cd10c071fa9e273024fa7730ace9"} Jan 03 06:19:50 crc kubenswrapper[4854]: I0103 06:19:50.564726 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" event={"ID":"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4","Type":"ContainerStarted","Data":"6ebbad1689633647f104cd74ff1fdfd69edb5a60dc6fa36c290b858274206bd2"} Jan 03 06:19:50 crc kubenswrapper[4854]: I0103 06:19:50.582445 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" podStartSLOduration=2.08471245 podStartE2EDuration="2.582419621s" podCreationTimestamp="2026-01-03 06:19:48 +0000 UTC" firstStartedPulling="2026-01-03 06:19:49.454246393 +0000 UTC m=+2367.780822965" lastFinishedPulling="2026-01-03 06:19:49.951953554 +0000 UTC m=+2368.278530136" observedRunningTime="2026-01-03 06:19:50.581536579 +0000 UTC m=+2368.908113161" watchObservedRunningTime="2026-01-03 06:19:50.582419621 +0000 UTC m=+2368.908996213" Jan 03 06:19:55 crc kubenswrapper[4854]: I0103 06:19:55.049343 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-mjxfd"] Jan 03 06:19:55 crc kubenswrapper[4854]: I0103 06:19:55.059671 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-mjxfd"] Jan 03 06:19:56 crc kubenswrapper[4854]: I0103 06:19:56.138344 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523cd360-c0b6-4711-b501-031ad4b8ed4f" 
path="/var/lib/kubelet/pods/523cd360-c0b6-4711-b501-031ad4b8ed4f/volumes" Jan 03 06:19:59 crc kubenswrapper[4854]: I0103 06:19:59.672134 4854 generic.go:334] "Generic (PLEG): container finished" podID="535e8deb-1bd9-48ca-82e6-c0ea9174f4a4" containerID="6ebbad1689633647f104cd74ff1fdfd69edb5a60dc6fa36c290b858274206bd2" exitCode=0 Jan 03 06:19:59 crc kubenswrapper[4854]: I0103 06:19:59.672208 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" event={"ID":"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4","Type":"ContainerDied","Data":"6ebbad1689633647f104cd74ff1fdfd69edb5a60dc6fa36c290b858274206bd2"} Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.227670 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.295629 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7kdx\" (UniqueName: \"kubernetes.io/projected/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-kube-api-access-p7kdx\") pod \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.296014 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-ssh-key\") pod \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.296061 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-inventory\") pod \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\" (UID: \"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4\") " Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.329471 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-kube-api-access-p7kdx" (OuterVolumeSpecName: "kube-api-access-p7kdx") pod "535e8deb-1bd9-48ca-82e6-c0ea9174f4a4" (UID: "535e8deb-1bd9-48ca-82e6-c0ea9174f4a4"). InnerVolumeSpecName "kube-api-access-p7kdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.365918 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-inventory" (OuterVolumeSpecName: "inventory") pod "535e8deb-1bd9-48ca-82e6-c0ea9174f4a4" (UID: "535e8deb-1bd9-48ca-82e6-c0ea9174f4a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.392256 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "535e8deb-1bd9-48ca-82e6-c0ea9174f4a4" (UID: "535e8deb-1bd9-48ca-82e6-c0ea9174f4a4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.399502 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7kdx\" (UniqueName: \"kubernetes.io/projected/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-kube-api-access-p7kdx\") on node \"crc\" DevicePath \"\"" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.399542 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.399559 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/535e8deb-1bd9-48ca-82e6-c0ea9174f4a4-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.703404 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" event={"ID":"535e8deb-1bd9-48ca-82e6-c0ea9174f4a4","Type":"ContainerDied","Data":"c9a62cf514cda064c90fdf5677a062c39a34cd10c071fa9e273024fa7730ace9"} Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.703443 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a62cf514cda064c90fdf5677a062c39a34cd10c071fa9e273024fa7730ace9" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.703448 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-npjps" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.808316 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr"] Jan 03 06:20:01 crc kubenswrapper[4854]: E0103 06:20:01.808944 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535e8deb-1bd9-48ca-82e6-c0ea9174f4a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.808958 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="535e8deb-1bd9-48ca-82e6-c0ea9174f4a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.809269 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="535e8deb-1bd9-48ca-82e6-c0ea9174f4a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.810160 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.822809 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr"] Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.877754 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.878037 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.878248 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.878428 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.912326 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhls\" (UniqueName: \"kubernetes.io/projected/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-kube-api-access-xzhls\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.912645 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:01 crc kubenswrapper[4854]: I0103 06:20:01.912767 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:02 crc kubenswrapper[4854]: I0103 06:20:02.015601 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:02 crc kubenswrapper[4854]: I0103 06:20:02.015684 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:02 crc kubenswrapper[4854]: I0103 06:20:02.015722 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzhls\" (UniqueName: \"kubernetes.io/projected/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-kube-api-access-xzhls\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: 
\"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:02 crc kubenswrapper[4854]: I0103 06:20:02.020591 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:02 crc kubenswrapper[4854]: I0103 06:20:02.020980 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:02 crc kubenswrapper[4854]: I0103 06:20:02.033834 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzhls\" (UniqueName: \"kubernetes.io/projected/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-kube-api-access-xzhls\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:02 crc kubenswrapper[4854]: I0103 06:20:02.207372 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:02 crc kubenswrapper[4854]: I0103 06:20:02.783257 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr"] Jan 03 06:20:02 crc kubenswrapper[4854]: W0103 06:20:02.785841 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e71015f_c3fd_4e24_a9d3_6f8ec4937127.slice/crio-5834c00b5667d8fa60ec10b825877528d9c239d2758f5be75958beffdf854152 WatchSource:0}: Error finding container 5834c00b5667d8fa60ec10b825877528d9c239d2758f5be75958beffdf854152: Status 404 returned error can't find the container with id 5834c00b5667d8fa60ec10b825877528d9c239d2758f5be75958beffdf854152 Jan 03 06:20:03 crc kubenswrapper[4854]: I0103 06:20:03.536949 4854 scope.go:117] "RemoveContainer" containerID="ce6d54cb224afb56ad776a76583153b8945e1caf98b1df961f80d5b8879898fd" Jan 03 06:20:03 crc kubenswrapper[4854]: I0103 06:20:03.573672 4854 scope.go:117] "RemoveContainer" containerID="99a9bb4cb21e4def9aa5c449ac0640a93bcfd2962989bb1009d492734be07cad" Jan 03 06:20:03 crc kubenswrapper[4854]: I0103 06:20:03.749530 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" event={"ID":"4e71015f-c3fd-4e24-a9d3-6f8ec4937127","Type":"ContainerStarted","Data":"06a5fd2db77f8732f7f507eb25ffb8cca840985d7b383662a78883f20cfc4797"} Jan 03 06:20:03 crc kubenswrapper[4854]: I0103 06:20:03.749788 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" event={"ID":"4e71015f-c3fd-4e24-a9d3-6f8ec4937127","Type":"ContainerStarted","Data":"5834c00b5667d8fa60ec10b825877528d9c239d2758f5be75958beffdf854152"} Jan 03 06:20:03 crc kubenswrapper[4854]: I0103 06:20:03.776902 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" 
podStartSLOduration=2.299278171 podStartE2EDuration="2.776880398s" podCreationTimestamp="2026-01-03 06:20:01 +0000 UTC" firstStartedPulling="2026-01-03 06:20:02.788786145 +0000 UTC m=+2381.115362707" lastFinishedPulling="2026-01-03 06:20:03.266388342 +0000 UTC m=+2381.592964934" observedRunningTime="2026-01-03 06:20:03.765183591 +0000 UTC m=+2382.091760193" watchObservedRunningTime="2026-01-03 06:20:03.776880398 +0000 UTC m=+2382.103456970" Jan 03 06:20:11 crc kubenswrapper[4854]: I0103 06:20:11.755494 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:20:11 crc kubenswrapper[4854]: I0103 06:20:11.756062 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:20:14 crc kubenswrapper[4854]: I0103 06:20:14.878635 4854 generic.go:334] "Generic (PLEG): container finished" podID="4e71015f-c3fd-4e24-a9d3-6f8ec4937127" containerID="06a5fd2db77f8732f7f507eb25ffb8cca840985d7b383662a78883f20cfc4797" exitCode=0 Jan 03 06:20:14 crc kubenswrapper[4854]: I0103 06:20:14.878749 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" event={"ID":"4e71015f-c3fd-4e24-a9d3-6f8ec4937127","Type":"ContainerDied","Data":"06a5fd2db77f8732f7f507eb25ffb8cca840985d7b383662a78883f20cfc4797"} Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.478997 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.529422 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-ssh-key\") pod \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.529580 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-inventory\") pod \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.529718 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzhls\" (UniqueName: \"kubernetes.io/projected/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-kube-api-access-xzhls\") pod \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\" (UID: \"4e71015f-c3fd-4e24-a9d3-6f8ec4937127\") " Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.553958 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-kube-api-access-xzhls" (OuterVolumeSpecName: "kube-api-access-xzhls") pod "4e71015f-c3fd-4e24-a9d3-6f8ec4937127" (UID: "4e71015f-c3fd-4e24-a9d3-6f8ec4937127"). InnerVolumeSpecName "kube-api-access-xzhls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.569039 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-inventory" (OuterVolumeSpecName: "inventory") pod "4e71015f-c3fd-4e24-a9d3-6f8ec4937127" (UID: "4e71015f-c3fd-4e24-a9d3-6f8ec4937127"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.590342 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4e71015f-c3fd-4e24-a9d3-6f8ec4937127" (UID: "4e71015f-c3fd-4e24-a9d3-6f8ec4937127"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.633913 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.633953 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.633963 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzhls\" (UniqueName: \"kubernetes.io/projected/4e71015f-c3fd-4e24-a9d3-6f8ec4937127-kube-api-access-xzhls\") on node \"crc\" DevicePath \"\"" Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.901893 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" event={"ID":"4e71015f-c3fd-4e24-a9d3-6f8ec4937127","Type":"ContainerDied","Data":"5834c00b5667d8fa60ec10b825877528d9c239d2758f5be75958beffdf854152"} Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.901937 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5834c00b5667d8fa60ec10b825877528d9c239d2758f5be75958beffdf854152" Jan 03 06:20:16 crc kubenswrapper[4854]: I0103 06:20:16.901985 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bprgr" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.020545 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz"] Jan 03 06:20:17 crc kubenswrapper[4854]: E0103 06:20:17.021111 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e71015f-c3fd-4e24-a9d3-6f8ec4937127" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.021128 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e71015f-c3fd-4e24-a9d3-6f8ec4937127" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.021346 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e71015f-c3fd-4e24-a9d3-6f8ec4937127" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.022273 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.025488 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.025862 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.026026 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.026175 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.026317 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.026648 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.026816 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.026963 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.027067 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.032839 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz"] Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.158640 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.158710 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.158776 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.158801 4854 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.158845 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.158984 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159192 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159279 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159301 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159411 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 
06:20:17.159452 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159483 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159550 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159596 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159618 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtnl\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-kube-api-access-kjtnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.159639 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.262403 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.262864 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.262916 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.263021 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.263129 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.263323 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.263435 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.263609 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.263728 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.263793 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.264238 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.264327 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.264416 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.264520 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.264669 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.264725 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjtnl\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-kube-api-access-kjtnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.268018 
4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.268057 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.268147 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.268935 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.269034 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.269999 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.271638 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.272028 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.272775 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.272966 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.273023 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.273563 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.278974 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.279173 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.279847 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc 
kubenswrapper[4854]: I0103 06:20:17.284608 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjtnl\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-kube-api-access-kjtnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.362439 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:20:17 crc kubenswrapper[4854]: I0103 06:20:17.970664 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz"] Jan 03 06:20:18 crc kubenswrapper[4854]: I0103 06:20:18.936278 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" event={"ID":"5a9333f4-367f-4624-93ed-5b4161b1fb4d","Type":"ContainerStarted","Data":"24ec9692d2c13fa8f67fa437c8dc2bd22bef707075328497b9bd4d9eae133358"} Jan 03 06:20:19 crc kubenswrapper[4854]: I0103 06:20:19.949503 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" event={"ID":"5a9333f4-367f-4624-93ed-5b4161b1fb4d","Type":"ContainerStarted","Data":"a273402807fe2a4e4e8edb5221b56ac231539e41176db7360780403dc3e5e958"} Jan 03 06:20:19 crc kubenswrapper[4854]: I0103 06:20:19.992223 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" podStartSLOduration=2.510980264 podStartE2EDuration="3.992196285s" podCreationTimestamp="2026-01-03 06:20:16 +0000 UTC" firstStartedPulling="2026-01-03 06:20:17.975271457 +0000 UTC m=+2396.301848029" lastFinishedPulling="2026-01-03 06:20:19.456487468 +0000 UTC m=+2397.783064050" observedRunningTime="2026-01-03 06:20:19.968092882 +0000 UTC m=+2398.294669444" watchObservedRunningTime="2026-01-03 06:20:19.992196285 +0000 UTC m=+2398.318772877" Jan 03 06:20:41 crc kubenswrapper[4854]: I0103 06:20:41.756016 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:20:41 crc kubenswrapper[4854]: I0103 06:20:41.756546 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:20:41 crc kubenswrapper[4854]: I0103 06:20:41.756603 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:20:41 crc kubenswrapper[4854]: I0103 06:20:41.757655 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Jan 03 06:20:41 crc kubenswrapper[4854]: I0103 06:20:41.757722 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" gracePeriod=600 Jan 03 06:20:41 crc kubenswrapper[4854]: E0103 06:20:41.922775 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:20:42 crc kubenswrapper[4854]: I0103 06:20:42.280938 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" exitCode=0 Jan 03 06:20:42 crc kubenswrapper[4854]: I0103 06:20:42.280982 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658"} Jan 03 06:20:42 crc kubenswrapper[4854]: I0103 06:20:42.281013 4854 scope.go:117] "RemoveContainer" containerID="0c123e736f10c9692e0df12e04db731fd8637258c1c778380ea9fd1d500829cf" Jan 03 06:20:42 crc kubenswrapper[4854]: I0103 06:20:42.281878 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:20:42 crc kubenswrapper[4854]: E0103 06:20:42.282706 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:20:48 crc kubenswrapper[4854]: I0103 06:20:48.820023 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vb6qr"] Jan 03 06:20:48 crc kubenswrapper[4854]: I0103 06:20:48.825274 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:48 crc kubenswrapper[4854]: I0103 06:20:48.837185 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb6qr"] Jan 03 06:20:48 crc kubenswrapper[4854]: I0103 06:20:48.981668 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-catalog-content\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:48 crc kubenswrapper[4854]: I0103 06:20:48.981745 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-utilities\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:48 crc kubenswrapper[4854]: I0103 06:20:48.981846 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsd4\" (UniqueName: \"kubernetes.io/projected/f4714505-d8e0-43e0-a38e-0386360f3a42-kube-api-access-tmsd4\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:49 crc kubenswrapper[4854]: I0103 06:20:49.084009 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-catalog-content\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:49 crc kubenswrapper[4854]: I0103 06:20:49.084455 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-utilities\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:49 crc kubenswrapper[4854]: I0103 06:20:49.084535 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmsd4\" (UniqueName: \"kubernetes.io/projected/f4714505-d8e0-43e0-a38e-0386360f3a42-kube-api-access-tmsd4\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:49 crc kubenswrapper[4854]: I0103 06:20:49.084680 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-catalog-content\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:49 crc kubenswrapper[4854]: I0103 06:20:49.084913 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-utilities\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:49 crc kubenswrapper[4854]: I0103 06:20:49.106945 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tmsd4\" (UniqueName: \"kubernetes.io/projected/f4714505-d8e0-43e0-a38e-0386360f3a42-kube-api-access-tmsd4\") pod \"redhat-marketplace-vb6qr\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:49 crc kubenswrapper[4854]: I0103 06:20:49.167033 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:49 crc kubenswrapper[4854]: I0103 06:20:49.779153 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb6qr"] Jan 03 06:20:50 crc kubenswrapper[4854]: I0103 06:20:50.387190 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerID="5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e" exitCode=0 Jan 03 06:20:50 crc kubenswrapper[4854]: I0103 06:20:50.387258 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb6qr" event={"ID":"f4714505-d8e0-43e0-a38e-0386360f3a42","Type":"ContainerDied","Data":"5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e"} Jan 03 06:20:50 crc kubenswrapper[4854]: I0103 06:20:50.387589 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb6qr" event={"ID":"f4714505-d8e0-43e0-a38e-0386360f3a42","Type":"ContainerStarted","Data":"fa60be724606cb452a03976b5ebc8581c3fc86fed5740d2e3c793a9cdede0723"} Jan 03 06:20:51 crc kubenswrapper[4854]: I0103 06:20:51.406698 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb6qr" event={"ID":"f4714505-d8e0-43e0-a38e-0386360f3a42","Type":"ContainerStarted","Data":"276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12"} Jan 03 06:20:52 crc kubenswrapper[4854]: I0103 06:20:52.431161 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerID="276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12" exitCode=0 Jan 03 06:20:52 crc kubenswrapper[4854]: I0103 06:20:52.431216 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb6qr" event={"ID":"f4714505-d8e0-43e0-a38e-0386360f3a42","Type":"ContainerDied","Data":"276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12"} Jan 03 06:20:53 crc kubenswrapper[4854]: I0103 06:20:53.451506 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb6qr" event={"ID":"f4714505-d8e0-43e0-a38e-0386360f3a42","Type":"ContainerStarted","Data":"e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898"} Jan 03 06:20:53 crc kubenswrapper[4854]: I0103 06:20:53.471486 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vb6qr" podStartSLOduration=3.019318712 podStartE2EDuration="5.471464534s" podCreationTimestamp="2026-01-03 06:20:48 +0000 UTC" firstStartedPulling="2026-01-03 06:20:50.392511515 +0000 UTC m=+2428.719088127" lastFinishedPulling="2026-01-03 06:20:52.844657367 +0000 UTC m=+2431.171233949" observedRunningTime="2026-01-03 06:20:53.469773713 +0000 UTC m=+2431.796350295" watchObservedRunningTime="2026-01-03 06:20:53.471464534 +0000 UTC m=+2431.798041106" Jan 03 06:20:57 crc kubenswrapper[4854]: I0103 06:20:57.119064 4854 scope.go:117] "RemoveContainer" 
containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:20:57 crc kubenswrapper[4854]: E0103 06:20:57.120191 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:20:59 crc kubenswrapper[4854]: I0103 06:20:59.168321 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:59 crc kubenswrapper[4854]: I0103 06:20:59.168770 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:59 crc kubenswrapper[4854]: I0103 06:20:59.242124 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:59 crc kubenswrapper[4854]: I0103 06:20:59.557706 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:20:59 crc kubenswrapper[4854]: I0103 06:20:59.646233 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb6qr"] Jan 03 06:21:01 crc kubenswrapper[4854]: I0103 06:21:01.536128 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vb6qr" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerName="registry-server" containerID="cri-o://e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898" gracePeriod=2 Jan 03 06:21:01 crc kubenswrapper[4854]: E0103 06:21:01.756115 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4714505_d8e0_43e0_a38e_0386360f3a42.slice/crio-conmon-e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4714505_d8e0_43e0_a38e_0386360f3a42.slice/crio-e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.101384 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.277805 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmsd4\" (UniqueName: \"kubernetes.io/projected/f4714505-d8e0-43e0-a38e-0386360f3a42-kube-api-access-tmsd4\") pod \"f4714505-d8e0-43e0-a38e-0386360f3a42\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.278122 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-utilities\") pod \"f4714505-d8e0-43e0-a38e-0386360f3a42\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.278264 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-catalog-content\") pod \"f4714505-d8e0-43e0-a38e-0386360f3a42\" (UID: \"f4714505-d8e0-43e0-a38e-0386360f3a42\") " Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.278921 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-utilities" (OuterVolumeSpecName: "utilities") pod "f4714505-d8e0-43e0-a38e-0386360f3a42" (UID: "f4714505-d8e0-43e0-a38e-0386360f3a42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.279915 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.283539 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4714505-d8e0-43e0-a38e-0386360f3a42-kube-api-access-tmsd4" (OuterVolumeSpecName: "kube-api-access-tmsd4") pod "f4714505-d8e0-43e0-a38e-0386360f3a42" (UID: "f4714505-d8e0-43e0-a38e-0386360f3a42"). InnerVolumeSpecName "kube-api-access-tmsd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.301795 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4714505-d8e0-43e0-a38e-0386360f3a42" (UID: "f4714505-d8e0-43e0-a38e-0386360f3a42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.384793 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4714505-d8e0-43e0-a38e-0386360f3a42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.384828 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmsd4\" (UniqueName: \"kubernetes.io/projected/f4714505-d8e0-43e0-a38e-0386360f3a42-kube-api-access-tmsd4\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.549294 4854 generic.go:334] "Generic (PLEG): container finished" podID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerID="e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898" exitCode=0 Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.549334 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb6qr" event={"ID":"f4714505-d8e0-43e0-a38e-0386360f3a42","Type":"ContainerDied","Data":"e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898"} Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.549362 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb6qr" event={"ID":"f4714505-d8e0-43e0-a38e-0386360f3a42","Type":"ContainerDied","Data":"fa60be724606cb452a03976b5ebc8581c3fc86fed5740d2e3c793a9cdede0723"} Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.549381 4854 scope.go:117] "RemoveContainer" containerID="e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.549495 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb6qr" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.595555 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb6qr"] Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.595875 4854 scope.go:117] "RemoveContainer" containerID="276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.607076 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb6qr"] Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.649208 4854 scope.go:117] "RemoveContainer" containerID="5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.702861 4854 scope.go:117] "RemoveContainer" containerID="e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898" Jan 03 06:21:02 crc kubenswrapper[4854]: E0103 06:21:02.703371 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898\": container with ID starting with e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898 not found: ID does not exist" containerID="e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.703408 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898"} err="failed to get container status \"e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898\": rpc error: code = NotFound desc = could not find container \"e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898\": container with ID starting with e621949823ec5b8b6475f3faf95e6661d01767daeada5523f6a7e70d9210b898 not found: ID does not exist" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.703436 4854 scope.go:117] "RemoveContainer" containerID="276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12" Jan 03 06:21:02 crc kubenswrapper[4854]: E0103 06:21:02.703760 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12\": container with ID starting with 276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12 not found: ID does not exist" containerID="276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.703793 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12"} err="failed to get container status \"276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12\": rpc error: code = NotFound desc = could not find container \"276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12\": container with ID starting with 276979ae06a0bbb69c82920a703e95b92308430ec54df59ccd45671735e9ca12 not found: ID does not exist" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.703813 4854 scope.go:117] "RemoveContainer" containerID="5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e" Jan 03 06:21:02 crc kubenswrapper[4854]: E0103 06:21:02.704091 4854 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e\": container with ID starting with 5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e not found: ID does not exist" containerID="5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e" Jan 03 06:21:02 crc kubenswrapper[4854]: I0103 06:21:02.704117 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e"} err="failed to get container status \"5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e\": rpc error: code = NotFound desc = could not find container \"5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e\": container with ID starting with 5e1813c39e3b5c61ea4a9f0248f2f5f6ad2d98fbccc8e00a6f763ba557e1846e not found: ID does not exist" Jan 03 06:21:04 crc kubenswrapper[4854]: I0103 06:21:04.135579 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" path="/var/lib/kubelet/pods/f4714505-d8e0-43e0-a38e-0386360f3a42/volumes" Jan 03 06:21:10 crc kubenswrapper[4854]: I0103 06:21:10.118787 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:21:10 crc kubenswrapper[4854]: E0103 06:21:10.119699 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:21:14 crc kubenswrapper[4854]: I0103 06:21:14.709568 4854 generic.go:334] "Generic (PLEG): container finished" podID="5a9333f4-367f-4624-93ed-5b4161b1fb4d" containerID="a273402807fe2a4e4e8edb5221b56ac231539e41176db7360780403dc3e5e958" exitCode=0 Jan 03 06:21:14 crc kubenswrapper[4854]: I0103 06:21:14.709714 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" event={"ID":"5a9333f4-367f-4624-93ed-5b4161b1fb4d","Type":"ContainerDied","Data":"a273402807fe2a4e4e8edb5221b56ac231539e41176db7360780403dc3e5e958"} Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.199982 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.278744 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.278841 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.278879 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.278935 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-libvirt-combined-ca-bundle\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279032 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-power-monitoring-combined-ca-bundle\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279091 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-inventory\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279447 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279526 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ovn-combined-ca-bundle\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279555 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279593 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ssh-key\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279693 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-neutron-metadata-combined-ca-bundle\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279747 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-repo-setup-combined-ca-bundle\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279911 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-combined-ca-bundle\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279938 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-bootstrap-combined-ca-bundle\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279961 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjtnl\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-kube-api-access-kjtnl\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.279997 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-nova-combined-ca-bundle\") pod \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\" (UID: \"5a9333f4-367f-4624-93ed-5b4161b1fb4d\") " Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.286315 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.286814 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.287030 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.288072 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.288278 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.292235 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.292396 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.292466 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.292498 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.293042 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.300741 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.300884 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-kube-api-access-kjtnl" (OuterVolumeSpecName: "kube-api-access-kjtnl") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "kube-api-access-kjtnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.302358 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.311285 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.323521 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.328274 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-inventory" (OuterVolumeSpecName: "inventory") pod "5a9333f4-367f-4624-93ed-5b4161b1fb4d" (UID: "5a9333f4-367f-4624-93ed-5b4161b1fb4d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.383323 4854 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.383538 4854 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.383603 4854 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.383689 4854 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.383751 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjtnl\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-kube-api-access-kjtnl\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.383820 4854 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.383881 4854 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.383972 4854 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.384036 4854 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.384143 4854 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.384233 
4854 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.384294 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.384365 4854 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.384434 4854 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5a9333f4-367f-4624-93ed-5b4161b1fb4d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.384500 4854 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.384566 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a9333f4-367f-4624-93ed-5b4161b1fb4d-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.738773 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" event={"ID":"5a9333f4-367f-4624-93ed-5b4161b1fb4d","Type":"ContainerDied","Data":"24ec9692d2c13fa8f67fa437c8dc2bd22bef707075328497b9bd4d9eae133358"} Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.738838 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ec9692d2c13fa8f67fa437c8dc2bd22bef707075328497b9bd4d9eae133358" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.738852 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dfxtz" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.851970 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr"] Jan 03 06:21:16 crc kubenswrapper[4854]: E0103 06:21:16.852645 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9333f4-367f-4624-93ed-5b4161b1fb4d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.852670 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9333f4-367f-4624-93ed-5b4161b1fb4d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 03 06:21:16 crc kubenswrapper[4854]: E0103 06:21:16.852691 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerName="extract-content" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.852699 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerName="extract-content" Jan 03 06:21:16 crc kubenswrapper[4854]: E0103 06:21:16.852720 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerName="registry-server" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.852737 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerName="registry-server" Jan 03 06:21:16 crc kubenswrapper[4854]: E0103 06:21:16.852791 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerName="extract-utilities" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.852802 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerName="extract-utilities" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.853093 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a9333f4-367f-4624-93ed-5b4161b1fb4d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.853123 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4714505-d8e0-43e0-a38e-0386360f3a42" containerName="registry-server" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.854439 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.857330 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.857358 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.857646 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.858057 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.858608 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.864711 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr"] Jan 03 06:21:16 crc kubenswrapper[4854]: I0103 06:21:16.999866 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.000201 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f79p7\" (UniqueName: \"kubernetes.io/projected/a1440aa8-3e92-422a-95ea-8dda204f13fd-kube-api-access-f79p7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.000384 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.000551 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.000773 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.104415 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.104676 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f79p7\" (UniqueName: \"kubernetes.io/projected/a1440aa8-3e92-422a-95ea-8dda204f13fd-kube-api-access-f79p7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.104800 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.104960 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.105204 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.106286 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.109471 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.111885 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.112407 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.137803 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f79p7\" (UniqueName: \"kubernetes.io/projected/a1440aa8-3e92-422a-95ea-8dda204f13fd-kube-api-access-f79p7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ws9pr\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:17 crc kubenswrapper[4854]: I0103 06:21:17.181870 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:21:18 crc kubenswrapper[4854]: I0103 06:21:17.825749 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr"] Jan 03 06:21:18 crc kubenswrapper[4854]: I0103 06:21:18.770961 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" event={"ID":"a1440aa8-3e92-422a-95ea-8dda204f13fd","Type":"ContainerStarted","Data":"ddfa5c30a00058f18b3e438e9b83a083615f16781fce5c10d52477e3bb085cae"} Jan 03 06:21:18 crc kubenswrapper[4854]: I0103 06:21:18.771307 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" event={"ID":"a1440aa8-3e92-422a-95ea-8dda204f13fd","Type":"ContainerStarted","Data":"e4081a4ef3b4da96ed70d96262f5262a1c0e06362c90e95026cfa8ec3332ea15"} Jan 03 06:21:18 crc kubenswrapper[4854]: I0103 06:21:18.795951 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" podStartSLOduration=2.266878693 podStartE2EDuration="2.795930936s" podCreationTimestamp="2026-01-03 06:21:16 +0000 UTC" firstStartedPulling="2026-01-03 06:21:17.835150884 +0000 UTC m=+2456.161727496" lastFinishedPulling="2026-01-03 06:21:18.364203177 +0000 UTC m=+2456.690779739" observedRunningTime="2026-01-03 06:21:18.788047192 +0000 UTC m=+2457.114623774" watchObservedRunningTime="2026-01-03 06:21:18.795930936 +0000 UTC m=+2457.122507528" Jan 03 06:21:24 crc kubenswrapper[4854]: I0103 06:21:24.119133 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:21:24 crc kubenswrapper[4854]: E0103 06:21:24.119835 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:21:37 crc kubenswrapper[4854]: I0103 06:21:37.118827 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:21:37 crc kubenswrapper[4854]: E0103 06:21:37.119766 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:21:51 crc 
kubenswrapper[4854]: I0103 06:21:51.117936 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:21:51 crc kubenswrapper[4854]: E0103 06:21:51.119668 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:22:02 crc kubenswrapper[4854]: I0103 06:22:02.138125 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:22:02 crc kubenswrapper[4854]: E0103 06:22:02.139668 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:22:13 crc kubenswrapper[4854]: I0103 06:22:13.118445 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:22:13 crc kubenswrapper[4854]: E0103 06:22:13.119359 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:22:27 crc kubenswrapper[4854]: I0103 06:22:27.119476 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:22:27 crc kubenswrapper[4854]: E0103 06:22:27.120412 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:22:39 crc kubenswrapper[4854]: I0103 06:22:39.118470 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:22:39 crc kubenswrapper[4854]: E0103 06:22:39.121008 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:22:46 crc kubenswrapper[4854]: I0103 06:22:46.198350 4854 generic.go:334] "Generic (PLEG): container finished" podID="a1440aa8-3e92-422a-95ea-8dda204f13fd" 
containerID="ddfa5c30a00058f18b3e438e9b83a083615f16781fce5c10d52477e3bb085cae" exitCode=0 Jan 03 06:22:46 crc kubenswrapper[4854]: I0103 06:22:46.198445 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" event={"ID":"a1440aa8-3e92-422a-95ea-8dda204f13fd","Type":"ContainerDied","Data":"ddfa5c30a00058f18b3e438e9b83a083615f16781fce5c10d52477e3bb085cae"} Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.778403 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.890110 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovn-combined-ca-bundle\") pod \"a1440aa8-3e92-422a-95ea-8dda204f13fd\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.890233 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovncontroller-config-0\") pod \"a1440aa8-3e92-422a-95ea-8dda204f13fd\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.890402 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-inventory\") pod \"a1440aa8-3e92-422a-95ea-8dda204f13fd\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.890443 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f79p7\" (UniqueName: \"kubernetes.io/projected/a1440aa8-3e92-422a-95ea-8dda204f13fd-kube-api-access-f79p7\") pod \"a1440aa8-3e92-422a-95ea-8dda204f13fd\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.890494 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ssh-key\") pod \"a1440aa8-3e92-422a-95ea-8dda204f13fd\" (UID: \"a1440aa8-3e92-422a-95ea-8dda204f13fd\") " Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.896819 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1440aa8-3e92-422a-95ea-8dda204f13fd-kube-api-access-f79p7" (OuterVolumeSpecName: "kube-api-access-f79p7") pod "a1440aa8-3e92-422a-95ea-8dda204f13fd" (UID: "a1440aa8-3e92-422a-95ea-8dda204f13fd"). InnerVolumeSpecName "kube-api-access-f79p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.897480 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a1440aa8-3e92-422a-95ea-8dda204f13fd" (UID: "a1440aa8-3e92-422a-95ea-8dda204f13fd"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.923278 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a1440aa8-3e92-422a-95ea-8dda204f13fd" (UID: "a1440aa8-3e92-422a-95ea-8dda204f13fd"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.926300 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-inventory" (OuterVolumeSpecName: "inventory") pod "a1440aa8-3e92-422a-95ea-8dda204f13fd" (UID: "a1440aa8-3e92-422a-95ea-8dda204f13fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.930803 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a1440aa8-3e92-422a-95ea-8dda204f13fd" (UID: "a1440aa8-3e92-422a-95ea-8dda204f13fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.993212 4854 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.993247 4854 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1440aa8-3e92-422a-95ea-8dda204f13fd-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.993258 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.993269 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f79p7\" (UniqueName: \"kubernetes.io/projected/a1440aa8-3e92-422a-95ea-8dda204f13fd-kube-api-access-f79p7\") on node \"crc\" DevicePath \"\"" Jan 03 06:22:47 crc kubenswrapper[4854]: I0103 06:22:47.993278 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1440aa8-3e92-422a-95ea-8dda204f13fd-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.229394 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" event={"ID":"a1440aa8-3e92-422a-95ea-8dda204f13fd","Type":"ContainerDied","Data":"e4081a4ef3b4da96ed70d96262f5262a1c0e06362c90e95026cfa8ec3332ea15"} Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.229435 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4081a4ef3b4da96ed70d96262f5262a1c0e06362c90e95026cfa8ec3332ea15" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.229494 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ws9pr" Jan 03 06:22:48 crc kubenswrapper[4854]: E0103 06:22:48.264528 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1440aa8_3e92_422a_95ea_8dda204f13fd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1440aa8_3e92_422a_95ea_8dda204f13fd.slice/crio-e4081a4ef3b4da96ed70d96262f5262a1c0e06362c90e95026cfa8ec3332ea15\": RecentStats: unable to find data in memory cache]" Jan 03 06:22:48 crc kubenswrapper[4854]: E0103 06:22:48.264538 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1440aa8_3e92_422a_95ea_8dda204f13fd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1440aa8_3e92_422a_95ea_8dda204f13fd.slice/crio-e4081a4ef3b4da96ed70d96262f5262a1c0e06362c90e95026cfa8ec3332ea15\": RecentStats: unable to find data in memory cache]" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.346457 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf"] Jan 03 06:22:48 crc kubenswrapper[4854]: E0103 06:22:48.354720 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1440aa8-3e92-422a-95ea-8dda204f13fd" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.355035 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1440aa8-3e92-422a-95ea-8dda204f13fd" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.355606 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1440aa8-3e92-422a-95ea-8dda204f13fd" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.357631 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.364240 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.364566 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.364929 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.371109 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf"] Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.372531 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.373365 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.373743 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.510193 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.510426 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.510469 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js6mr\" (UniqueName: \"kubernetes.io/projected/c5791209-cb96-482e-ae8b-5e93517a6901-kube-api-access-js6mr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.510504 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.510540 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.510578 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.614885 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.614937 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js6mr\" (UniqueName: \"kubernetes.io/projected/c5791209-cb96-482e-ae8b-5e93517a6901-kube-api-access-js6mr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.614966 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.614999 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.615030 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.615155 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.621058 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.624549 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.624657 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.624680 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.625212 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.635942 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js6mr\" (UniqueName: \"kubernetes.io/projected/c5791209-cb96-482e-ae8b-5e93517a6901-kube-api-access-js6mr\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:48 crc kubenswrapper[4854]: I0103 06:22:48.680920 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:22:49 crc kubenswrapper[4854]: I0103 06:22:49.350639 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf"] Jan 03 06:22:50 crc kubenswrapper[4854]: I0103 06:22:50.252509 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" event={"ID":"c5791209-cb96-482e-ae8b-5e93517a6901","Type":"ContainerStarted","Data":"86f8858e6f65e509c914ec4c44db9705f4519e8fbfa8863e61a1cd1c1ab1cf34"} Jan 03 06:22:50 crc kubenswrapper[4854]: I0103 06:22:50.252910 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" event={"ID":"c5791209-cb96-482e-ae8b-5e93517a6901","Type":"ContainerStarted","Data":"c629c6aa2758284f63d2dd46c94ba752a9d703b72d776740a938b70e909788de"} Jan 03 06:22:50 crc kubenswrapper[4854]: I0103 06:22:50.325439 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" podStartSLOduration=1.78416529 podStartE2EDuration="2.325421937s" podCreationTimestamp="2026-01-03 06:22:48 +0000 UTC" firstStartedPulling="2026-01-03 06:22:49.359698928 +0000 UTC m=+2547.686275500" lastFinishedPulling="2026-01-03 06:22:49.900955575 +0000 UTC m=+2548.227532147" observedRunningTime="2026-01-03 06:22:50.323335375 +0000 UTC m=+2548.649911967" watchObservedRunningTime="2026-01-03 06:22:50.325421937 +0000 UTC m=+2548.651998499" Jan 03 06:22:53 crc kubenswrapper[4854]: I0103 06:22:53.119110 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:22:53 crc kubenswrapper[4854]: E0103 06:22:53.119826 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:23:06 crc kubenswrapper[4854]: I0103 06:23:06.119510 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:23:06 crc kubenswrapper[4854]: E0103 06:23:06.120835 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:23:18 crc kubenswrapper[4854]: I0103 06:23:18.118536 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:23:18 crc kubenswrapper[4854]: E0103 06:23:18.119578 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:23:29 crc kubenswrapper[4854]: I0103 06:23:29.118812 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:23:29 crc kubenswrapper[4854]: E0103 06:23:29.119731 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:23:44 crc kubenswrapper[4854]: I0103 06:23:44.118809 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:23:44 crc kubenswrapper[4854]: E0103 06:23:44.119656 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:23:54 crc kubenswrapper[4854]: I0103 06:23:54.065811 4854 generic.go:334] "Generic (PLEG): container finished" podID="c5791209-cb96-482e-ae8b-5e93517a6901" containerID="86f8858e6f65e509c914ec4c44db9705f4519e8fbfa8863e61a1cd1c1ab1cf34" exitCode=0 Jan 03 06:23:54 crc kubenswrapper[4854]: I0103 06:23:54.065927 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" event={"ID":"c5791209-cb96-482e-ae8b-5e93517a6901","Type":"ContainerDied","Data":"86f8858e6f65e509c914ec4c44db9705f4519e8fbfa8863e61a1cd1c1ab1cf34"} Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.621183 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.963997 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-nova-metadata-neutron-config-0\") pod \"c5791209-cb96-482e-ae8b-5e93517a6901\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.964233 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js6mr\" (UniqueName: \"kubernetes.io/projected/c5791209-cb96-482e-ae8b-5e93517a6901-kube-api-access-js6mr\") pod \"c5791209-cb96-482e-ae8b-5e93517a6901\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.964414 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-metadata-combined-ca-bundle\") pod \"c5791209-cb96-482e-ae8b-5e93517a6901\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.964513 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c5791209-cb96-482e-ae8b-5e93517a6901\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.964580 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-inventory\") pod \"c5791209-cb96-482e-ae8b-5e93517a6901\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.964703 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-ssh-key\") pod \"c5791209-cb96-482e-ae8b-5e93517a6901\" (UID: \"c5791209-cb96-482e-ae8b-5e93517a6901\") " Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.972712 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5791209-cb96-482e-ae8b-5e93517a6901-kube-api-access-js6mr" (OuterVolumeSpecName: "kube-api-access-js6mr") pod "c5791209-cb96-482e-ae8b-5e93517a6901" (UID: "c5791209-cb96-482e-ae8b-5e93517a6901"). InnerVolumeSpecName "kube-api-access-js6mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:23:55 crc kubenswrapper[4854]: I0103 06:23:55.974387 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c5791209-cb96-482e-ae8b-5e93517a6901" (UID: "c5791209-cb96-482e-ae8b-5e93517a6901"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.014603 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c5791209-cb96-482e-ae8b-5e93517a6901" (UID: "c5791209-cb96-482e-ae8b-5e93517a6901"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.019590 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c5791209-cb96-482e-ae8b-5e93517a6901" (UID: "c5791209-cb96-482e-ae8b-5e93517a6901"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.029301 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-inventory" (OuterVolumeSpecName: "inventory") pod "c5791209-cb96-482e-ae8b-5e93517a6901" (UID: "c5791209-cb96-482e-ae8b-5e93517a6901"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.050088 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c5791209-cb96-482e-ae8b-5e93517a6901" (UID: "c5791209-cb96-482e-ae8b-5e93517a6901"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.069378 4854 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.069546 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js6mr\" (UniqueName: \"kubernetes.io/projected/c5791209-cb96-482e-ae8b-5e93517a6901-kube-api-access-js6mr\") on node \"crc\" DevicePath \"\"" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.069635 4854 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.069693 4854 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.069758 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.069818 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5791209-cb96-482e-ae8b-5e93517a6901-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.138819 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:23:56 crc kubenswrapper[4854]: E0103 06:23:56.139414 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.147897 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.205257 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7xglf" event={"ID":"c5791209-cb96-482e-ae8b-5e93517a6901","Type":"ContainerDied","Data":"c629c6aa2758284f63d2dd46c94ba752a9d703b72d776740a938b70e909788de"} Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.205318 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c629c6aa2758284f63d2dd46c94ba752a9d703b72d776740a938b70e909788de" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.358133 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g"] Jan 03 06:23:56 crc kubenswrapper[4854]: E0103 06:23:56.358859 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5791209-cb96-482e-ae8b-5e93517a6901" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.358920 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5791209-cb96-482e-ae8b-5e93517a6901" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.359242 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5791209-cb96-482e-ae8b-5e93517a6901" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.360142 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.373475 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.373736 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.373868 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.374448 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.374558 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.408135 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g"] Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.496106 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbxb\" (UniqueName: \"kubernetes.io/projected/9fa94443-3657-4b52-857f-a8bc752ab28c-kube-api-access-jhbxb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.496176 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.496438 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.496622 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.496967 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.599577 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.599994 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.600142 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.600429 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhbxb\" (UniqueName: \"kubernetes.io/projected/9fa94443-3657-4b52-857f-a8bc752ab28c-kube-api-access-jhbxb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.600510 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.605766 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.606628 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.606777 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.621369 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.638325 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhbxb\" (UniqueName: \"kubernetes.io/projected/9fa94443-3657-4b52-857f-a8bc752ab28c-kube-api-access-jhbxb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bms4g\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:56 crc kubenswrapper[4854]: I0103 06:23:56.693806 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" Jan 03 06:23:57 crc kubenswrapper[4854]: I0103 06:23:57.271456 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g"] Jan 03 06:23:57 crc kubenswrapper[4854]: I0103 06:23:57.275765 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 06:23:58 crc kubenswrapper[4854]: I0103 06:23:58.175069 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" event={"ID":"9fa94443-3657-4b52-857f-a8bc752ab28c","Type":"ContainerStarted","Data":"3f7da5a5be8c9a0f2e73cbb0a8276adce66a20ab6e36798eeca9274bb309efea"} Jan 03 06:23:58 crc kubenswrapper[4854]: I0103 06:23:58.175460 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" event={"ID":"9fa94443-3657-4b52-857f-a8bc752ab28c","Type":"ContainerStarted","Data":"9e320d6d4d6fef7ed5c9f8d990357abbfb749d169ab2f7839abda4a5c149ea9c"} Jan 03 06:23:58 crc kubenswrapper[4854]: I0103 06:23:58.198745 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" podStartSLOduration=1.712651581 podStartE2EDuration="2.198722704s" podCreationTimestamp="2026-01-03 06:23:56 +0000 UTC" firstStartedPulling="2026-01-03 06:23:57.275503393 +0000 UTC m=+2615.602079975" lastFinishedPulling="2026-01-03 06:23:57.761574526 +0000 UTC m=+2616.088151098" observedRunningTime="2026-01-03 06:23:58.191593056 +0000 UTC m=+2616.518169658" watchObservedRunningTime="2026-01-03 06:23:58.198722704 +0000 UTC m=+2616.525299276" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.587860 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lzbj9"] Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.591843 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.615448 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzbj9"] Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.737827 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-utilities\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.738940 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-catalog-content\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.739130 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rs7\" (UniqueName: \"kubernetes.io/projected/598012b7-4300-41c3-ba98-88623a918b86-kube-api-access-n6rs7\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.842620 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-utilities\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.842707 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-catalog-content\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.842776 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rs7\" (UniqueName: \"kubernetes.io/projected/598012b7-4300-41c3-ba98-88623a918b86-kube-api-access-n6rs7\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.843201 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-utilities\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.843442 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-catalog-content\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.867022 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n6rs7\" (UniqueName: \"kubernetes.io/projected/598012b7-4300-41c3-ba98-88623a918b86-kube-api-access-n6rs7\") pod \"redhat-operators-lzbj9\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:05 crc kubenswrapper[4854]: I0103 06:24:05.920400 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:06 crc kubenswrapper[4854]: I0103 06:24:06.494039 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzbj9"] Jan 03 06:24:07 crc kubenswrapper[4854]: I0103 06:24:07.118799 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:24:07 crc kubenswrapper[4854]: E0103 06:24:07.119429 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:24:07 crc kubenswrapper[4854]: I0103 06:24:07.263534 4854 generic.go:334] "Generic (PLEG): container finished" podID="598012b7-4300-41c3-ba98-88623a918b86" containerID="62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15" exitCode=0 Jan 03 06:24:07 crc kubenswrapper[4854]: I0103 06:24:07.263575 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzbj9" event={"ID":"598012b7-4300-41c3-ba98-88623a918b86","Type":"ContainerDied","Data":"62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15"} Jan 03 06:24:07 crc kubenswrapper[4854]: I0103 06:24:07.263599 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzbj9" event={"ID":"598012b7-4300-41c3-ba98-88623a918b86","Type":"ContainerStarted","Data":"54deea1e7802ac413a53ae35da35627a9c643062d1226226ffe4a5d39a82e5de"} Jan 03 06:24:09 crc kubenswrapper[4854]: I0103 06:24:09.287484 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzbj9" event={"ID":"598012b7-4300-41c3-ba98-88623a918b86","Type":"ContainerStarted","Data":"1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb"} Jan 03 06:24:13 crc kubenswrapper[4854]: I0103 06:24:13.337537 4854 generic.go:334] "Generic (PLEG): container finished" podID="598012b7-4300-41c3-ba98-88623a918b86" containerID="1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb" exitCode=0 Jan 03 06:24:13 crc kubenswrapper[4854]: I0103 06:24:13.337613 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzbj9" event={"ID":"598012b7-4300-41c3-ba98-88623a918b86","Type":"ContainerDied","Data":"1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb"} Jan 03 06:24:14 crc kubenswrapper[4854]: I0103 06:24:14.354426 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzbj9" event={"ID":"598012b7-4300-41c3-ba98-88623a918b86","Type":"ContainerStarted","Data":"4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5"} Jan 03 06:24:14 crc kubenswrapper[4854]: I0103 06:24:14.382221 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-lzbj9" podStartSLOduration=2.8769776 podStartE2EDuration="9.382195555s" podCreationTimestamp="2026-01-03 06:24:05 +0000 UTC" firstStartedPulling="2026-01-03 06:24:07.265653059 +0000 UTC m=+2625.592229631" lastFinishedPulling="2026-01-03 06:24:13.770871004 +0000 UTC m=+2632.097447586" observedRunningTime="2026-01-03 06:24:14.373006257 +0000 UTC m=+2632.699582849" watchObservedRunningTime="2026-01-03 06:24:14.382195555 +0000 UTC m=+2632.708772137" Jan 03 06:24:15 crc kubenswrapper[4854]: I0103 06:24:15.921135 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:15 crc kubenswrapper[4854]: I0103 06:24:15.921694 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:16 crc kubenswrapper[4854]: I0103 06:24:16.994622 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lzbj9" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="registry-server" probeResult="failure" output=< Jan 03 06:24:16 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 06:24:16 crc kubenswrapper[4854]: > Jan 03 06:24:21 crc kubenswrapper[4854]: I0103 06:24:21.120521 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:24:21 crc kubenswrapper[4854]: E0103 06:24:21.121683 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:24:25 crc kubenswrapper[4854]: I0103 06:24:25.981337 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:26 crc kubenswrapper[4854]: I0103 06:24:26.050749 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:26 crc kubenswrapper[4854]: I0103 06:24:26.225928 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzbj9"] Jan 03 06:24:27 crc kubenswrapper[4854]: I0103 06:24:27.502258 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lzbj9" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="registry-server" containerID="cri-o://4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5" gracePeriod=2 Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.098873 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.244018 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-utilities\") pod \"598012b7-4300-41c3-ba98-88623a918b86\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.244123 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-catalog-content\") pod \"598012b7-4300-41c3-ba98-88623a918b86\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.244219 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6rs7\" (UniqueName: \"kubernetes.io/projected/598012b7-4300-41c3-ba98-88623a918b86-kube-api-access-n6rs7\") pod \"598012b7-4300-41c3-ba98-88623a918b86\" (UID: \"598012b7-4300-41c3-ba98-88623a918b86\") " Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.245951 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-utilities" (OuterVolumeSpecName: "utilities") pod "598012b7-4300-41c3-ba98-88623a918b86" (UID: "598012b7-4300-41c3-ba98-88623a918b86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.257020 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598012b7-4300-41c3-ba98-88623a918b86-kube-api-access-n6rs7" (OuterVolumeSpecName: "kube-api-access-n6rs7") pod "598012b7-4300-41c3-ba98-88623a918b86" (UID: "598012b7-4300-41c3-ba98-88623a918b86"). InnerVolumeSpecName "kube-api-access-n6rs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.346658 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.346924 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6rs7\" (UniqueName: \"kubernetes.io/projected/598012b7-4300-41c3-ba98-88623a918b86-kube-api-access-n6rs7\") on node \"crc\" DevicePath \"\"" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.374487 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "598012b7-4300-41c3-ba98-88623a918b86" (UID: "598012b7-4300-41c3-ba98-88623a918b86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.449316 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598012b7-4300-41c3-ba98-88623a918b86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.516618 4854 generic.go:334] "Generic (PLEG): container finished" podID="598012b7-4300-41c3-ba98-88623a918b86" containerID="4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5" exitCode=0 Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.516668 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzbj9" event={"ID":"598012b7-4300-41c3-ba98-88623a918b86","Type":"ContainerDied","Data":"4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5"} Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.516692 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzbj9" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.516712 4854 scope.go:117] "RemoveContainer" containerID="4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.516700 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzbj9" event={"ID":"598012b7-4300-41c3-ba98-88623a918b86","Type":"ContainerDied","Data":"54deea1e7802ac413a53ae35da35627a9c643062d1226226ffe4a5d39a82e5de"} Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.551394 4854 scope.go:117] "RemoveContainer" containerID="1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.570896 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzbj9"] Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.579934 4854 scope.go:117] "RemoveContainer" containerID="62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.581263 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lzbj9"] Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.665341 4854 scope.go:117] "RemoveContainer" containerID="4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5" Jan 03 06:24:28 crc kubenswrapper[4854]: E0103 06:24:28.673656 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5\": container with ID starting with 4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5 not found: ID does not exist" containerID="4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.673709 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5"} err="failed to get container status \"4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5\": rpc error: code = NotFound desc = could not find container \"4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5\": container with ID starting with 4ba3c19befd9d50aa3cccc6b3ea7b7c97997aaa1f2f827400674db4a606a36e5 not found: ID does not exist" Jan 03 06:24:28 crc 
kubenswrapper[4854]: I0103 06:24:28.673759 4854 scope.go:117] "RemoveContainer" containerID="1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb" Jan 03 06:24:28 crc kubenswrapper[4854]: E0103 06:24:28.679254 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb\": container with ID starting with 1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb not found: ID does not exist" containerID="1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.679310 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb"} err="failed to get container status \"1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb\": rpc error: code = NotFound desc = could not find container \"1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb\": container with ID starting with 1e1ff4c2f7950418edf51b5ed0bcb7b90be0a02dff61587abd8d40283d3fe2eb not found: ID does not exist" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.679352 4854 scope.go:117] "RemoveContainer" containerID="62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15" Jan 03 06:24:28 crc kubenswrapper[4854]: E0103 06:24:28.683253 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15\": container with ID starting with 62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15 not found: ID does not exist" containerID="62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15" Jan 03 06:24:28 crc kubenswrapper[4854]: I0103 06:24:28.683308 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15"} err="failed to get container status \"62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15\": rpc error: code = NotFound desc = could not find container \"62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15\": container with ID starting with 62844d39f32cf8926db1047fefb798b10dd339af2ad02018306cd703426b3f15 not found: ID does not exist" Jan 03 06:24:30 crc kubenswrapper[4854]: I0103 06:24:30.131164 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598012b7-4300-41c3-ba98-88623a918b86" path="/var/lib/kubelet/pods/598012b7-4300-41c3-ba98-88623a918b86/volumes" Jan 03 06:24:32 crc kubenswrapper[4854]: I0103 06:24:32.162854 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:24:32 crc kubenswrapper[4854]: E0103 06:24:32.164180 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:24:47 crc kubenswrapper[4854]: I0103 06:24:47.118407 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" 
Jan 03 06:24:47 crc kubenswrapper[4854]: E0103 06:24:47.119142 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:24:58 crc kubenswrapper[4854]: I0103 06:24:58.119160 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:24:58 crc kubenswrapper[4854]: E0103 06:24:58.119883 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:25:09 crc kubenswrapper[4854]: I0103 06:25:09.118499 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:25:09 crc kubenswrapper[4854]: E0103 06:25:09.119450 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:25:21 crc kubenswrapper[4854]: I0103 06:25:21.123312 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:25:21 crc kubenswrapper[4854]: E0103 06:25:21.124410 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:25:35 crc kubenswrapper[4854]: I0103 06:25:35.119815 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:25:35 crc kubenswrapper[4854]: E0103 06:25:35.121197 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:25:48 crc kubenswrapper[4854]: I0103 06:25:48.121250 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:25:48 crc kubenswrapper[4854]: I0103 06:25:48.681058 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" 
event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"571b591e68ec825c867cd032317b7d0af493afca30da7ad81b2ff7ae6daedf75"} Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.423750 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lzz5s"] Jan 03 06:26:59 crc kubenswrapper[4854]: E0103 06:26:59.425049 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="extract-utilities" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.425068 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="extract-utilities" Jan 03 06:26:59 crc kubenswrapper[4854]: E0103 06:26:59.425089 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="extract-content" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.425098 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="extract-content" Jan 03 06:26:59 crc kubenswrapper[4854]: E0103 06:26:59.425151 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="registry-server" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.425160 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="registry-server" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.425469 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="598012b7-4300-41c3-ba98-88623a918b86" containerName="registry-server" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.427710 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.442678 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzz5s"] Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.478571 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-catalog-content\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.478687 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-utilities\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.478842 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlm4v\" (UniqueName: \"kubernetes.io/projected/4e09e781-8550-4ad5-8fd8-8c8bc492365b-kube-api-access-wlm4v\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.580743 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-catalog-content\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.580832 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-utilities\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.580905 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlm4v\" (UniqueName: \"kubernetes.io/projected/4e09e781-8550-4ad5-8fd8-8c8bc492365b-kube-api-access-wlm4v\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.581252 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-catalog-content\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.581365 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-utilities\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.603202 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wlm4v\" (UniqueName: \"kubernetes.io/projected/4e09e781-8550-4ad5-8fd8-8c8bc492365b-kube-api-access-wlm4v\") pod \"community-operators-lzz5s\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:26:59 crc kubenswrapper[4854]: I0103 06:26:59.753095 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:27:00 crc kubenswrapper[4854]: I0103 06:27:00.146197 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzz5s"] Jan 03 06:27:00 crc kubenswrapper[4854]: I0103 06:27:00.662428 4854 generic.go:334] "Generic (PLEG): container finished" podID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerID="0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6" exitCode=0 Jan 03 06:27:00 crc kubenswrapper[4854]: I0103 06:27:00.662481 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzz5s" event={"ID":"4e09e781-8550-4ad5-8fd8-8c8bc492365b","Type":"ContainerDied","Data":"0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6"} Jan 03 06:27:00 crc kubenswrapper[4854]: I0103 06:27:00.662513 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzz5s" event={"ID":"4e09e781-8550-4ad5-8fd8-8c8bc492365b","Type":"ContainerStarted","Data":"07b0455f60cc9c79e2f582c7917ab78ddfcd60eef9762412361730f32d4e5349"} Jan 03 06:27:01 crc kubenswrapper[4854]: I0103 06:27:01.679621 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzz5s" event={"ID":"4e09e781-8550-4ad5-8fd8-8c8bc492365b","Type":"ContainerStarted","Data":"3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb"} Jan 03 06:27:02 crc kubenswrapper[4854]: I0103 06:27:02.692284 4854 generic.go:334] "Generic (PLEG): container finished" podID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerID="3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb" exitCode=0 Jan 03 06:27:02 crc kubenswrapper[4854]: I0103 06:27:02.692612 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzz5s" event={"ID":"4e09e781-8550-4ad5-8fd8-8c8bc492365b","Type":"ContainerDied","Data":"3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb"} Jan 03 06:27:03 crc kubenswrapper[4854]: I0103 06:27:03.706982 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzz5s" event={"ID":"4e09e781-8550-4ad5-8fd8-8c8bc492365b","Type":"ContainerStarted","Data":"c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e"} Jan 03 06:27:03 crc kubenswrapper[4854]: I0103 06:27:03.738627 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lzz5s" podStartSLOduration=2.234723214 podStartE2EDuration="4.738610738s" podCreationTimestamp="2026-01-03 06:26:59 +0000 UTC" firstStartedPulling="2026-01-03 06:27:00.664813244 +0000 UTC m=+2798.991389856" lastFinishedPulling="2026-01-03 06:27:03.168700808 +0000 UTC m=+2801.495277380" observedRunningTime="2026-01-03 06:27:03.732454424 +0000 UTC m=+2802.059031006" watchObservedRunningTime="2026-01-03 06:27:03.738610738 +0000 UTC m=+2802.065187310" Jan 03 06:27:09 crc kubenswrapper[4854]: I0103 06:27:09.754869 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:27:09 crc kubenswrapper[4854]: I0103 06:27:09.755401 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:27:09 crc kubenswrapper[4854]: I0103 06:27:09.813588 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:27:09 crc kubenswrapper[4854]: I0103 06:27:09.867154 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:27:10 crc kubenswrapper[4854]: I0103 06:27:10.064451 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzz5s"] Jan 03 06:27:11 crc kubenswrapper[4854]: I0103 06:27:11.807812 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lzz5s" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerName="registry-server" containerID="cri-o://c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e" gracePeriod=2 Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.358382 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.466448 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-utilities\") pod \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.466525 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlm4v\" (UniqueName: \"kubernetes.io/projected/4e09e781-8550-4ad5-8fd8-8c8bc492365b-kube-api-access-wlm4v\") pod \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.466649 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-catalog-content\") pod \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\" (UID: \"4e09e781-8550-4ad5-8fd8-8c8bc492365b\") " Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.467465 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-utilities" (OuterVolumeSpecName: "utilities") pod "4e09e781-8550-4ad5-8fd8-8c8bc492365b" (UID: "4e09e781-8550-4ad5-8fd8-8c8bc492365b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.472807 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e09e781-8550-4ad5-8fd8-8c8bc492365b-kube-api-access-wlm4v" (OuterVolumeSpecName: "kube-api-access-wlm4v") pod "4e09e781-8550-4ad5-8fd8-8c8bc492365b" (UID: "4e09e781-8550-4ad5-8fd8-8c8bc492365b"). InnerVolumeSpecName "kube-api-access-wlm4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.538498 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e09e781-8550-4ad5-8fd8-8c8bc492365b" (UID: "4e09e781-8550-4ad5-8fd8-8c8bc492365b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.569644 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.569688 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlm4v\" (UniqueName: \"kubernetes.io/projected/4e09e781-8550-4ad5-8fd8-8c8bc492365b-kube-api-access-wlm4v\") on node \"crc\" DevicePath \"\"" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.569700 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e09e781-8550-4ad5-8fd8-8c8bc492365b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.820842 4854 generic.go:334] "Generic (PLEG): container finished" podID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerID="c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e" exitCode=0 Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.820942 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzz5s" event={"ID":"4e09e781-8550-4ad5-8fd8-8c8bc492365b","Type":"ContainerDied","Data":"c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e"} Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.820991 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lzz5s" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.821281 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzz5s" event={"ID":"4e09e781-8550-4ad5-8fd8-8c8bc492365b","Type":"ContainerDied","Data":"07b0455f60cc9c79e2f582c7917ab78ddfcd60eef9762412361730f32d4e5349"} Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.821314 4854 scope.go:117] "RemoveContainer" containerID="c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.854016 4854 scope.go:117] "RemoveContainer" containerID="3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.897603 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzz5s"] Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.916892 4854 scope.go:117] "RemoveContainer" containerID="0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.922104 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lzz5s"] Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.940975 4854 scope.go:117] "RemoveContainer" containerID="c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e" Jan 03 06:27:12 crc kubenswrapper[4854]: E0103 06:27:12.941735 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e\": container with ID starting with c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e not found: ID does not exist" containerID="c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.941776 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e"} err="failed to get container status \"c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e\": rpc error: code = NotFound desc = could not find container \"c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e\": container with ID starting with c7204a5bd6ccaedbcd3c7938ca4f8937c5e9d080d447001d7b22ce689a27ba4e not found: ID does not exist" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.941808 4854 scope.go:117] "RemoveContainer" containerID="3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb" Jan 03 06:27:12 crc kubenswrapper[4854]: E0103 06:27:12.942165 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb\": container with ID starting with 3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb not found: ID does not exist" containerID="3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.942209 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb"} err="failed to get container status \"3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb\": rpc error: code = NotFound desc = could not find 
container \"3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb\": container with ID starting with 3b0faa2349a24799224405af786191cfb5772d120a23d245ab46c0965f040bcb not found: ID does not exist" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.942237 4854 scope.go:117] "RemoveContainer" containerID="0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6" Jan 03 06:27:12 crc kubenswrapper[4854]: E0103 06:27:12.942527 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6\": container with ID starting with 0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6 not found: ID does not exist" containerID="0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6" Jan 03 06:27:12 crc kubenswrapper[4854]: I0103 06:27:12.942553 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6"} err="failed to get container status \"0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6\": rpc error: code = NotFound desc = could not find container \"0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6\": container with ID starting with 0b2e27f423420823c39bee65fa2b3589181513134ee0d595225590033279b3e6 not found: ID does not exist" Jan 03 06:27:14 crc kubenswrapper[4854]: I0103 06:27:14.130980 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" path="/var/lib/kubelet/pods/4e09e781-8550-4ad5-8fd8-8c8bc492365b/volumes" Jan 03 06:27:46 crc kubenswrapper[4854]: I0103 06:27:46.849888 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzvzw"] Jan 03 06:27:46 crc kubenswrapper[4854]: E0103 06:27:46.850815 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerName="registry-server" Jan 03 06:27:46 crc kubenswrapper[4854]: I0103 06:27:46.850830 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerName="registry-server" Jan 03 06:27:46 crc kubenswrapper[4854]: E0103 06:27:46.850865 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerName="extract-utilities" Jan 03 06:27:46 crc kubenswrapper[4854]: I0103 06:27:46.850874 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerName="extract-utilities" Jan 03 06:27:46 crc kubenswrapper[4854]: E0103 06:27:46.850936 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerName="extract-content" Jan 03 06:27:46 crc kubenswrapper[4854]: I0103 06:27:46.850947 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerName="extract-content" Jan 03 06:27:46 crc kubenswrapper[4854]: I0103 06:27:46.851413 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e09e781-8550-4ad5-8fd8-8c8bc492365b" containerName="registry-server" Jan 03 06:27:46 crc kubenswrapper[4854]: I0103 06:27:46.853189 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:46 crc kubenswrapper[4854]: I0103 06:27:46.878949 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzvzw"] Jan 03 06:27:46 crc kubenswrapper[4854]: I0103 06:27:46.999776 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dd6t\" (UniqueName: \"kubernetes.io/projected/96167b92-22da-494f-bf14-53a0f843cb9d-kube-api-access-7dd6t\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:46.999896 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-utilities\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:46.999973 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-catalog-content\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:47.102915 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dd6t\" (UniqueName: \"kubernetes.io/projected/96167b92-22da-494f-bf14-53a0f843cb9d-kube-api-access-7dd6t\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:47.103005 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-utilities\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:47.103074 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-catalog-content\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:47.103680 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-catalog-content\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:47.103697 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-utilities\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:47.128423 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7dd6t\" (UniqueName: \"kubernetes.io/projected/96167b92-22da-494f-bf14-53a0f843cb9d-kube-api-access-7dd6t\") pod \"certified-operators-mzvzw\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:47.202387 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:47 crc kubenswrapper[4854]: I0103 06:27:47.761559 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzvzw"] Jan 03 06:27:48 crc kubenswrapper[4854]: E0103 06:27:48.261144 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96167b92_22da_494f_bf14_53a0f843cb9d.slice/crio-103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:27:48 crc kubenswrapper[4854]: E0103 06:27:48.261442 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96167b92_22da_494f_bf14_53a0f843cb9d.slice/crio-103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96167b92_22da_494f_bf14_53a0f843cb9d.slice/crio-conmon-103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:27:48 crc kubenswrapper[4854]: I0103 06:27:48.297059 4854 generic.go:334] "Generic (PLEG): container finished" podID="96167b92-22da-494f-bf14-53a0f843cb9d" containerID="103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf" exitCode=0 Jan 03 06:27:48 crc kubenswrapper[4854]: I0103 06:27:48.297130 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzvzw" event={"ID":"96167b92-22da-494f-bf14-53a0f843cb9d","Type":"ContainerDied","Data":"103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf"} Jan 03 06:27:48 crc kubenswrapper[4854]: I0103 06:27:48.297167 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzvzw" event={"ID":"96167b92-22da-494f-bf14-53a0f843cb9d","Type":"ContainerStarted","Data":"41c074ab1493a8beabddf2c4fecc5df37e6ed178908bb39db61b167269bf6e39"} Jan 03 06:27:49 crc kubenswrapper[4854]: I0103 06:27:49.310695 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzvzw" event={"ID":"96167b92-22da-494f-bf14-53a0f843cb9d","Type":"ContainerStarted","Data":"1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f"} Jan 03 06:27:50 crc kubenswrapper[4854]: I0103 06:27:50.337504 4854 generic.go:334] "Generic (PLEG): container finished" podID="96167b92-22da-494f-bf14-53a0f843cb9d" containerID="1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f" exitCode=0 Jan 03 06:27:50 crc kubenswrapper[4854]: I0103 06:27:50.337571 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzvzw" event={"ID":"96167b92-22da-494f-bf14-53a0f843cb9d","Type":"ContainerDied","Data":"1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f"} Jan 03 
06:27:51 crc kubenswrapper[4854]: I0103 06:27:51.350447 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzvzw" event={"ID":"96167b92-22da-494f-bf14-53a0f843cb9d","Type":"ContainerStarted","Data":"3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e"} Jan 03 06:27:51 crc kubenswrapper[4854]: I0103 06:27:51.384017 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzvzw" podStartSLOduration=2.883208051 podStartE2EDuration="5.383999696s" podCreationTimestamp="2026-01-03 06:27:46 +0000 UTC" firstStartedPulling="2026-01-03 06:27:48.301432945 +0000 UTC m=+2846.628009517" lastFinishedPulling="2026-01-03 06:27:50.80222457 +0000 UTC m=+2849.128801162" observedRunningTime="2026-01-03 06:27:51.381600526 +0000 UTC m=+2849.708177148" watchObservedRunningTime="2026-01-03 06:27:51.383999696 +0000 UTC m=+2849.710576268" Jan 03 06:27:57 crc kubenswrapper[4854]: I0103 06:27:57.202928 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:57 crc kubenswrapper[4854]: I0103 06:27:57.203657 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:57 crc kubenswrapper[4854]: I0103 06:27:57.276919 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:57 crc kubenswrapper[4854]: I0103 06:27:57.506518 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:27:57 crc kubenswrapper[4854]: I0103 06:27:57.561732 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzvzw"] Jan 03 06:27:59 crc kubenswrapper[4854]: I0103 06:27:59.456222 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzvzw" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" containerName="registry-server" containerID="cri-o://3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e" gracePeriod=2 Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.073889 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.233512 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dd6t\" (UniqueName: \"kubernetes.io/projected/96167b92-22da-494f-bf14-53a0f843cb9d-kube-api-access-7dd6t\") pod \"96167b92-22da-494f-bf14-53a0f843cb9d\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.233934 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-utilities\") pod \"96167b92-22da-494f-bf14-53a0f843cb9d\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.234030 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-catalog-content\") pod \"96167b92-22da-494f-bf14-53a0f843cb9d\" (UID: \"96167b92-22da-494f-bf14-53a0f843cb9d\") " Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.235074 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-utilities" (OuterVolumeSpecName: "utilities") pod "96167b92-22da-494f-bf14-53a0f843cb9d" (UID: "96167b92-22da-494f-bf14-53a0f843cb9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.241122 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96167b92-22da-494f-bf14-53a0f843cb9d-kube-api-access-7dd6t" (OuterVolumeSpecName: "kube-api-access-7dd6t") pod "96167b92-22da-494f-bf14-53a0f843cb9d" (UID: "96167b92-22da-494f-bf14-53a0f843cb9d"). InnerVolumeSpecName "kube-api-access-7dd6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.299423 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96167b92-22da-494f-bf14-53a0f843cb9d" (UID: "96167b92-22da-494f-bf14-53a0f843cb9d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.338354 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.338667 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dd6t\" (UniqueName: \"kubernetes.io/projected/96167b92-22da-494f-bf14-53a0f843cb9d-kube-api-access-7dd6t\") on node \"crc\" DevicePath \"\"" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.338683 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96167b92-22da-494f-bf14-53a0f843cb9d-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.474326 4854 generic.go:334] "Generic (PLEG): container finished" podID="96167b92-22da-494f-bf14-53a0f843cb9d" containerID="3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e" exitCode=0 Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.474376 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzvzw" event={"ID":"96167b92-22da-494f-bf14-53a0f843cb9d","Type":"ContainerDied","Data":"3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e"} Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.474394 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzvzw" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.474424 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzvzw" event={"ID":"96167b92-22da-494f-bf14-53a0f843cb9d","Type":"ContainerDied","Data":"41c074ab1493a8beabddf2c4fecc5df37e6ed178908bb39db61b167269bf6e39"} Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.474460 4854 scope.go:117] "RemoveContainer" containerID="3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.498555 4854 scope.go:117] "RemoveContainer" containerID="1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.527605 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzvzw"] Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.528756 4854 scope.go:117] "RemoveContainer" containerID="103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.542625 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzvzw"] Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.589121 4854 scope.go:117] "RemoveContainer" containerID="3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e" Jan 03 06:28:00 crc kubenswrapper[4854]: E0103 06:28:00.589969 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e\": container with ID starting with 3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e not found: ID does not exist" containerID="3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.590008 
4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e"} err="failed to get container status \"3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e\": rpc error: code = NotFound desc = could not find container \"3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e\": container with ID starting with 3facb4514e6aa5e7c25715317537f729e5d5c579165a3d4f6ad250f568e1847e not found: ID does not exist" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.590053 4854 scope.go:117] "RemoveContainer" containerID="1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f" Jan 03 06:28:00 crc kubenswrapper[4854]: E0103 06:28:00.590742 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f\": container with ID starting with 1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f not found: ID does not exist" containerID="1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.590804 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f"} err="failed to get container status \"1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f\": rpc error: code = NotFound desc = could not find container \"1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f\": container with ID starting with 1192482f941bf81609bd845ff632a233e89a68c65c3031a2d171b3f8514e524f not found: ID does not exist" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.590848 4854 scope.go:117] "RemoveContainer" containerID="103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf" Jan 03 06:28:00 crc kubenswrapper[4854]: E0103 06:28:00.592202 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf\": container with ID starting with 103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf not found: ID does not exist" containerID="103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf" Jan 03 06:28:00 crc kubenswrapper[4854]: I0103 06:28:00.592242 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf"} err="failed to get container status \"103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf\": rpc error: code = NotFound desc = could not find container \"103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf\": container with ID starting with 103cddf5e94a7776696136655326766deff4e5411faa8df58d02012b912321cf not found: ID does not exist" Jan 03 06:28:02 crc kubenswrapper[4854]: I0103 06:28:02.143607 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" path="/var/lib/kubelet/pods/96167b92-22da-494f-bf14-53a0f843cb9d/volumes" Jan 03 06:28:11 crc kubenswrapper[4854]: I0103 06:28:11.755941 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Jan 03 06:28:11 crc kubenswrapper[4854]: I0103 06:28:11.756513 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 06:28:38 crc kubenswrapper[4854]: I0103 06:28:38.920325 4854 generic.go:334] "Generic (PLEG): container finished" podID="9fa94443-3657-4b52-857f-a8bc752ab28c" containerID="3f7da5a5be8c9a0f2e73cbb0a8276adce66a20ab6e36798eeca9274bb309efea" exitCode=0
Jan 03 06:28:38 crc kubenswrapper[4854]: I0103 06:28:38.920429 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" event={"ID":"9fa94443-3657-4b52-857f-a8bc752ab28c","Type":"ContainerDied","Data":"3f7da5a5be8c9a0f2e73cbb0a8276adce66a20ab6e36798eeca9274bb309efea"}
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.609616 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g"
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.655027 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-combined-ca-bundle\") pod \"9fa94443-3657-4b52-857f-a8bc752ab28c\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") "
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.655122 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-ssh-key\") pod \"9fa94443-3657-4b52-857f-a8bc752ab28c\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") "
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.655416 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-secret-0\") pod \"9fa94443-3657-4b52-857f-a8bc752ab28c\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") "
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.655715 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-inventory\") pod \"9fa94443-3657-4b52-857f-a8bc752ab28c\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") "
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.655781 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbxb\" (UniqueName: \"kubernetes.io/projected/9fa94443-3657-4b52-857f-a8bc752ab28c-kube-api-access-jhbxb\") pod \"9fa94443-3657-4b52-857f-a8bc752ab28c\" (UID: \"9fa94443-3657-4b52-857f-a8bc752ab28c\") "
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.662755 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa94443-3657-4b52-857f-a8bc752ab28c-kube-api-access-jhbxb" (OuterVolumeSpecName: "kube-api-access-jhbxb") pod "9fa94443-3657-4b52-857f-a8bc752ab28c" (UID: "9fa94443-3657-4b52-857f-a8bc752ab28c"). InnerVolumeSpecName "kube-api-access-jhbxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.664458 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9fa94443-3657-4b52-857f-a8bc752ab28c" (UID: "9fa94443-3657-4b52-857f-a8bc752ab28c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.691371 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-inventory" (OuterVolumeSpecName: "inventory") pod "9fa94443-3657-4b52-857f-a8bc752ab28c" (UID: "9fa94443-3657-4b52-857f-a8bc752ab28c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.717928 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9fa94443-3657-4b52-857f-a8bc752ab28c" (UID: "9fa94443-3657-4b52-857f-a8bc752ab28c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.734773 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9fa94443-3657-4b52-857f-a8bc752ab28c" (UID: "9fa94443-3657-4b52-857f-a8bc752ab28c"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.759562 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-inventory\") on node \"crc\" DevicePath \"\""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.759600 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbxb\" (UniqueName: \"kubernetes.io/projected/9fa94443-3657-4b52-857f-a8bc752ab28c-kube-api-access-jhbxb\") on node \"crc\" DevicePath \"\""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.759618 4854 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.759631 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.759643 4854 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9fa94443-3657-4b52-857f-a8bc752ab28c-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.956937 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g" event={"ID":"9fa94443-3657-4b52-857f-a8bc752ab28c","Type":"ContainerDied","Data":"9e320d6d4d6fef7ed5c9f8d990357abbfb749d169ab2f7839abda4a5c149ea9c"}
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.956994 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e320d6d4d6fef7ed5c9f8d990357abbfb749d169ab2f7839abda4a5c149ea9c"
Jan 03 06:28:40 crc kubenswrapper[4854]: I0103 06:28:40.957267 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bms4g"
Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.050443 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt"]
Jan 03 06:28:41 crc kubenswrapper[4854]: E0103 06:28:41.051123 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa94443-3657-4b52-857f-a8bc752ab28c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.051146 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa94443-3657-4b52-857f-a8bc752ab28c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 03 06:28:41 crc kubenswrapper[4854]: E0103 06:28:41.051192 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" containerName="extract-content"
Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.051203 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" containerName="extract-content"
Jan 03 06:28:41 crc kubenswrapper[4854]: E0103 06:28:41.051226 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" containerName="extract-utilities"
Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.051236 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" containerName="extract-utilities"
Jan 03 06:28:41 crc kubenswrapper[4854]: E0103 06:28:41.051272 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" containerName="registry-server"
Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.051280 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" containerName="registry-server"
Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.051580 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="96167b92-22da-494f-bf14-53a0f843cb9d" containerName="registry-server"
Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.051607 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa94443-3657-4b52-857f-a8bc752ab28c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.052766 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt"
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.058573 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.058653 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.058760 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.058894 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.059147 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.062255 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.062265 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.062754 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt"] Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.211891 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.212364 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.212475 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.212611 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.212676 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.212735 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.212771 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.212891 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.213275 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjxjf\" (UniqueName: \"kubernetes.io/projected/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-kube-api-access-mjxjf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.315746 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.315808 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.315867 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.315908 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.315929 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.315946 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.315969 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.315993 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.316049 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjxjf\" (UniqueName: \"kubernetes.io/projected/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-kube-api-access-mjxjf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.317925 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.320344 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.321910 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.322393 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.323664 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.324097 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.324485 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.326515 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.345884 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjxjf\" (UniqueName: \"kubernetes.io/projected/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-kube-api-access-mjxjf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-698bt\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.422420 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.755360 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:28:41 crc kubenswrapper[4854]: I0103 06:28:41.755724 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:28:42 crc kubenswrapper[4854]: I0103 06:28:42.146625 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt"] Jan 03 06:28:43 crc kubenswrapper[4854]: I0103 06:28:43.041459 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" event={"ID":"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2","Type":"ContainerStarted","Data":"3fac66893d37760953df6d0bda7ddc39f6cb9a4f0078ef8cdfa7fc89978af451"} Jan 03 06:28:44 crc kubenswrapper[4854]: I0103 06:28:44.065328 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" event={"ID":"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2","Type":"ContainerStarted","Data":"141ae675ca6492a951951f5536e2cd1f8214fe864d3a85c4bb569798572ee9a9"} Jan 03 06:28:44 crc kubenswrapper[4854]: I0103 06:28:44.122357 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" podStartSLOduration=2.358639907 podStartE2EDuration="3.122337979s" podCreationTimestamp="2026-01-03 06:28:41 +0000 UTC" firstStartedPulling="2026-01-03 06:28:42.151734877 +0000 UTC m=+2900.478311459" lastFinishedPulling="2026-01-03 06:28:42.915432949 +0000 UTC m=+2901.242009531" observedRunningTime="2026-01-03 06:28:44.097119281 +0000 UTC m=+2902.423695863" watchObservedRunningTime="2026-01-03 06:28:44.122337979 +0000 UTC m=+2902.448914551" Jan 03 06:29:11 crc kubenswrapper[4854]: I0103 06:29:11.755308 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:29:11 crc kubenswrapper[4854]: I0103 06:29:11.755827 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:29:11 crc kubenswrapper[4854]: I0103 06:29:11.755878 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:29:11 crc kubenswrapper[4854]: I0103 06:29:11.756900 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"571b591e68ec825c867cd032317b7d0af493afca30da7ad81b2ff7ae6daedf75"} 
pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:29:11 crc kubenswrapper[4854]: I0103 06:29:11.756959 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://571b591e68ec825c867cd032317b7d0af493afca30da7ad81b2ff7ae6daedf75" gracePeriod=600 Jan 03 06:29:12 crc kubenswrapper[4854]: I0103 06:29:12.432882 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="571b591e68ec825c867cd032317b7d0af493afca30da7ad81b2ff7ae6daedf75" exitCode=0 Jan 03 06:29:12 crc kubenswrapper[4854]: I0103 06:29:12.432944 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"571b591e68ec825c867cd032317b7d0af493afca30da7ad81b2ff7ae6daedf75"} Jan 03 06:29:12 crc kubenswrapper[4854]: I0103 06:29:12.433373 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2"} Jan 03 06:29:12 crc kubenswrapper[4854]: I0103 06:29:12.433401 4854 scope.go:117] "RemoveContainer" containerID="e16522f8da35959e8a35a9556715d70f86468625ee1058a24772e4459c7e4658" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.169530 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz"] Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.173929 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.180496 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz"] Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.206706 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.207059 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.356688 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dd904a2-23af-44ae-9332-b55a0d373d4f-config-volume\") pod \"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.356770 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dd904a2-23af-44ae-9332-b55a0d373d4f-secret-volume\") pod \"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.356956 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdknw\" (UniqueName: \"kubernetes.io/projected/7dd904a2-23af-44ae-9332-b55a0d373d4f-kube-api-access-vdknw\") pod \"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.460878 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dd904a2-23af-44ae-9332-b55a0d373d4f-config-volume\") pod \"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.460997 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dd904a2-23af-44ae-9332-b55a0d373d4f-secret-volume\") pod \"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.461276 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdknw\" (UniqueName: \"kubernetes.io/projected/7dd904a2-23af-44ae-9332-b55a0d373d4f-kube-api-access-vdknw\") pod \"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.462171 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dd904a2-23af-44ae-9332-b55a0d373d4f-config-volume\") pod 
\"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.473237 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dd904a2-23af-44ae-9332-b55a0d373d4f-secret-volume\") pod \"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.502491 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdknw\" (UniqueName: \"kubernetes.io/projected/7dd904a2-23af-44ae-9332-b55a0d373d4f-kube-api-access-vdknw\") pod \"collect-profiles-29457030-dk7jz\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:00 crc kubenswrapper[4854]: I0103 06:30:00.530999 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:01 crc kubenswrapper[4854]: I0103 06:30:01.025722 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz"] Jan 03 06:30:01 crc kubenswrapper[4854]: I0103 06:30:01.158112 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" event={"ID":"7dd904a2-23af-44ae-9332-b55a0d373d4f","Type":"ContainerStarted","Data":"a0e811d60618ad63738dfe72015a9d90b13c077c6d5f64856722d6db473dcc3a"} Jan 03 06:30:01 crc kubenswrapper[4854]: E0103 06:30:01.679141 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd904a2_23af_44ae_9332_b55a0d373d4f.slice/crio-conmon-8ed9a4c9d05e9db5fef7e203a1cfe1e01c74f144894c8b0e144285b94e6f4264.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd904a2_23af_44ae_9332_b55a0d373d4f.slice/crio-8ed9a4c9d05e9db5fef7e203a1cfe1e01c74f144894c8b0e144285b94e6f4264.scope\": RecentStats: unable to find data in memory cache]" Jan 03 06:30:02 crc kubenswrapper[4854]: I0103 06:30:02.177157 4854 generic.go:334] "Generic (PLEG): container finished" podID="7dd904a2-23af-44ae-9332-b55a0d373d4f" containerID="8ed9a4c9d05e9db5fef7e203a1cfe1e01c74f144894c8b0e144285b94e6f4264" exitCode=0 Jan 03 06:30:02 crc kubenswrapper[4854]: I0103 06:30:02.177232 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" event={"ID":"7dd904a2-23af-44ae-9332-b55a0d373d4f","Type":"ContainerDied","Data":"8ed9a4c9d05e9db5fef7e203a1cfe1e01c74f144894c8b0e144285b94e6f4264"} Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.604957 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.753810 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdknw\" (UniqueName: \"kubernetes.io/projected/7dd904a2-23af-44ae-9332-b55a0d373d4f-kube-api-access-vdknw\") pod \"7dd904a2-23af-44ae-9332-b55a0d373d4f\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.753932 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dd904a2-23af-44ae-9332-b55a0d373d4f-secret-volume\") pod \"7dd904a2-23af-44ae-9332-b55a0d373d4f\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.754073 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dd904a2-23af-44ae-9332-b55a0d373d4f-config-volume\") pod \"7dd904a2-23af-44ae-9332-b55a0d373d4f\" (UID: \"7dd904a2-23af-44ae-9332-b55a0d373d4f\") " Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.754755 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd904a2-23af-44ae-9332-b55a0d373d4f-config-volume" (OuterVolumeSpecName: "config-volume") pod "7dd904a2-23af-44ae-9332-b55a0d373d4f" (UID: "7dd904a2-23af-44ae-9332-b55a0d373d4f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.761136 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd904a2-23af-44ae-9332-b55a0d373d4f-kube-api-access-vdknw" (OuterVolumeSpecName: "kube-api-access-vdknw") pod "7dd904a2-23af-44ae-9332-b55a0d373d4f" (UID: "7dd904a2-23af-44ae-9332-b55a0d373d4f"). InnerVolumeSpecName "kube-api-access-vdknw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.762842 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd904a2-23af-44ae-9332-b55a0d373d4f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7dd904a2-23af-44ae-9332-b55a0d373d4f" (UID: "7dd904a2-23af-44ae-9332-b55a0d373d4f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.857618 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdknw\" (UniqueName: \"kubernetes.io/projected/7dd904a2-23af-44ae-9332-b55a0d373d4f-kube-api-access-vdknw\") on node \"crc\" DevicePath \"\"" Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.857653 4854 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dd904a2-23af-44ae-9332-b55a0d373d4f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 06:30:03 crc kubenswrapper[4854]: I0103 06:30:03.857666 4854 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dd904a2-23af-44ae-9332-b55a0d373d4f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 06:30:04 crc kubenswrapper[4854]: I0103 06:30:04.200754 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" event={"ID":"7dd904a2-23af-44ae-9332-b55a0d373d4f","Type":"ContainerDied","Data":"a0e811d60618ad63738dfe72015a9d90b13c077c6d5f64856722d6db473dcc3a"} Jan 03 06:30:04 crc kubenswrapper[4854]: I0103 06:30:04.200809 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e811d60618ad63738dfe72015a9d90b13c077c6d5f64856722d6db473dcc3a" Jan 03 06:30:04 crc kubenswrapper[4854]: I0103 06:30:04.200823 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457030-dk7jz" Jan 03 06:30:04 crc kubenswrapper[4854]: I0103 06:30:04.700856 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"] Jan 03 06:30:04 crc kubenswrapper[4854]: I0103 06:30:04.717370 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456985-29drk"] Jan 03 06:30:06 crc kubenswrapper[4854]: I0103 06:30:06.138691 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec" path="/var/lib/kubelet/pods/00ca6b9b-1bf3-4fa5-b787-449b8f8bbfec/volumes" Jan 03 06:31:03 crc kubenswrapper[4854]: I0103 06:31:03.997267 4854 scope.go:117] "RemoveContainer" containerID="5c0d1907442a34cea381a800b574e65c8acedc1c4956e5e0bc228ca9a747b07e" Jan 03 06:31:17 crc kubenswrapper[4854]: I0103 06:31:17.288724 4854 generic.go:334] "Generic (PLEG): container finished" podID="696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" containerID="141ae675ca6492a951951f5536e2cd1f8214fe864d3a85c4bb569798572ee9a9" exitCode=0 Jan 03 06:31:17 crc kubenswrapper[4854]: I0103 06:31:17.288805 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" event={"ID":"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2","Type":"ContainerDied","Data":"141ae675ca6492a951951f5536e2cd1f8214fe864d3a85c4bb569798572ee9a9"} Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.854684 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.873697 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-inventory\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.874989 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-1\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.875175 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-0\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.875322 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-1\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.875444 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-ssh-key\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.875633 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-combined-ca-bundle\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.875936 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-extra-config-0\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.876124 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjxjf\" (UniqueName: \"kubernetes.io/projected/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-kube-api-access-mjxjf\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.876297 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-0\") pod \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\" (UID: \"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2\") " Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.909546 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.920119 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.920838 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-kube-api-access-mjxjf" (OuterVolumeSpecName: "kube-api-access-mjxjf") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "kube-api-access-mjxjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.935298 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-inventory" (OuterVolumeSpecName: "inventory") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.940847 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.942374 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.945130 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.958028 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.964242 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" (UID: "696b866a-1cc3-40f3-90e2-1f9e7d44e4f2"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986644 4854 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986703 4854 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986712 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjxjf\" (UniqueName: \"kubernetes.io/projected/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-kube-api-access-mjxjf\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986722 4854 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986732 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986745 4854 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986753 4854 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986761 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:18 crc kubenswrapper[4854]: I0103 06:31:18.986771 4854 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/696b866a-1cc3-40f3-90e2-1f9e7d44e4f2-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.317001 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" event={"ID":"696b866a-1cc3-40f3-90e2-1f9e7d44e4f2","Type":"ContainerDied","Data":"3fac66893d37760953df6d0bda7ddc39f6cb9a4f0078ef8cdfa7fc89978af451"} Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.317299 4854 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3fac66893d37760953df6d0bda7ddc39f6cb9a4f0078ef8cdfa7fc89978af451" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.317093 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-698bt" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.430872 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698"] Jan 03 06:31:19 crc kubenswrapper[4854]: E0103 06:31:19.431348 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd904a2-23af-44ae-9332-b55a0d373d4f" containerName="collect-profiles" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.431364 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd904a2-23af-44ae-9332-b55a0d373d4f" containerName="collect-profiles" Jan 03 06:31:19 crc kubenswrapper[4854]: E0103 06:31:19.431393 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.431400 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.431651 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd904a2-23af-44ae-9332-b55a0d373d4f" containerName="collect-profiles" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.431672 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="696b866a-1cc3-40f3-90e2-1f9e7d44e4f2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.432463 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.435653 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.435952 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.436043 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.435965 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.437180 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.448064 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698"] Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.497566 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2vb\" (UniqueName: \"kubernetes.io/projected/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-kube-api-access-5z2vb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.497617 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.497668 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.497729 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.497964 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc 
kubenswrapper[4854]: I0103 06:31:19.498069 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.498222 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.601020 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.601123 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.601155 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.601267 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.601303 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.601349 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.601491 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2vb\" (UniqueName: \"kubernetes.io/projected/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-kube-api-access-5z2vb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.605826 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.607199 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.607283 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.611607 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.612252 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.613017 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.620641 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2vb\" (UniqueName: \"kubernetes.io/projected/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-kube-api-access-5z2vb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6m698\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:19 crc kubenswrapper[4854]: I0103 06:31:19.749021 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:31:20 crc kubenswrapper[4854]: I0103 06:31:20.202027 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 06:31:20 crc kubenswrapper[4854]: I0103 06:31:20.221702 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698"] Jan 03 06:31:20 crc kubenswrapper[4854]: I0103 06:31:20.339547 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" event={"ID":"9643ceca-5e7b-4dfd-a68a-27ff08e51c26","Type":"ContainerStarted","Data":"88a915b2aca0ecccbd8cd238bacda418beb6e19f72b35f2806aad7ab32d146e2"} Jan 03 06:31:21 crc kubenswrapper[4854]: I0103 06:31:21.352383 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" event={"ID":"9643ceca-5e7b-4dfd-a68a-27ff08e51c26","Type":"ContainerStarted","Data":"b80acf4276c04936f29cafcbef0078f9368a93714444eb5eecd54a9cb2aafa50"} Jan 03 06:31:21 crc kubenswrapper[4854]: I0103 06:31:21.373684 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" podStartSLOduration=1.8805299070000001 podStartE2EDuration="2.373664365s" podCreationTimestamp="2026-01-03 06:31:19 +0000 UTC" firstStartedPulling="2026-01-03 06:31:20.201820355 +0000 UTC m=+3058.528396927" lastFinishedPulling="2026-01-03 06:31:20.694954813 +0000 UTC m=+3059.021531385" observedRunningTime="2026-01-03 06:31:21.371276547 +0000 UTC m=+3059.697853119" watchObservedRunningTime="2026-01-03 06:31:21.373664365 +0000 UTC m=+3059.700240937" Jan 03 06:31:41 crc kubenswrapper[4854]: I0103 06:31:41.764495 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:31:41 crc kubenswrapper[4854]: I0103 06:31:41.765230 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:32:11 crc kubenswrapper[4854]: I0103 06:32:11.756271 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:32:11 crc kubenswrapper[4854]: I0103 06:32:11.757196 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:32:41 crc kubenswrapper[4854]: I0103 06:32:41.756144 4854 patch_prober.go:28] interesting 
pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:32:41 crc kubenswrapper[4854]: I0103 06:32:41.756936 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:32:41 crc kubenswrapper[4854]: I0103 06:32:41.756995 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:32:41 crc kubenswrapper[4854]: I0103 06:32:41.758060 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:32:41 crc kubenswrapper[4854]: I0103 06:32:41.758162 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" gracePeriod=600 Jan 03 06:32:41 crc kubenswrapper[4854]: E0103 06:32:41.881423 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:32:42 crc kubenswrapper[4854]: I0103 06:32:42.442845 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" exitCode=0 Jan 03 06:32:42 crc kubenswrapper[4854]: I0103 06:32:42.442889 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2"} Jan 03 06:32:42 crc kubenswrapper[4854]: I0103 06:32:42.442935 4854 scope.go:117] "RemoveContainer" containerID="571b591e68ec825c867cd032317b7d0af493afca30da7ad81b2ff7ae6daedf75" Jan 03 06:32:42 crc kubenswrapper[4854]: I0103 06:32:42.443738 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:32:42 crc kubenswrapper[4854]: E0103 06:32:42.443996 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:32:54 crc kubenswrapper[4854]: I0103 06:32:54.118438 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:32:54 crc kubenswrapper[4854]: E0103 06:32:54.119418 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:33:07 crc kubenswrapper[4854]: I0103 06:33:07.118936 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:33:07 crc kubenswrapper[4854]: E0103 06:33:07.119929 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:33:20 crc kubenswrapper[4854]: I0103 06:33:20.119194 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:33:20 crc kubenswrapper[4854]: E0103 06:33:20.120327 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:33:34 crc kubenswrapper[4854]: I0103 06:33:34.132438 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:33:34 crc kubenswrapper[4854]: E0103 06:33:34.133132 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:33:41 crc kubenswrapper[4854]: I0103 06:33:41.987245 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g56rc"] Jan 03 06:33:41 crc kubenswrapper[4854]: I0103 06:33:41.993045 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.001661 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g56rc"] Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.159747 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nqsn\" (UniqueName: \"kubernetes.io/projected/64345244-2d92-4d0a-b58b-509269353b9d-kube-api-access-7nqsn\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.161055 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-utilities\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.161160 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-catalog-content\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.263764 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-utilities\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.263837 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-catalog-content\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.263961 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nqsn\" (UniqueName: \"kubernetes.io/projected/64345244-2d92-4d0a-b58b-509269353b9d-kube-api-access-7nqsn\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.264435 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-utilities\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.264553 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-catalog-content\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.285891 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7nqsn\" (UniqueName: \"kubernetes.io/projected/64345244-2d92-4d0a-b58b-509269353b9d-kube-api-access-7nqsn\") pod \"redhat-marketplace-g56rc\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:42 crc kubenswrapper[4854]: I0103 06:33:42.319582 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:43 crc kubenswrapper[4854]: I0103 06:33:42.852382 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g56rc"] Jan 03 06:33:43 crc kubenswrapper[4854]: I0103 06:33:43.526845 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g56rc" event={"ID":"64345244-2d92-4d0a-b58b-509269353b9d","Type":"ContainerStarted","Data":"1efb7a18c0595034985c2ee042ec90d5a6cac6bfe7acb9d5bcd861c5cadc1813"} Jan 03 06:33:44 crc kubenswrapper[4854]: I0103 06:33:44.540351 4854 generic.go:334] "Generic (PLEG): container finished" podID="64345244-2d92-4d0a-b58b-509269353b9d" containerID="d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70" exitCode=0 Jan 03 06:33:44 crc kubenswrapper[4854]: I0103 06:33:44.540432 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g56rc" event={"ID":"64345244-2d92-4d0a-b58b-509269353b9d","Type":"ContainerDied","Data":"d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70"} Jan 03 06:33:45 crc kubenswrapper[4854]: I0103 06:33:45.118312 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:33:45 crc kubenswrapper[4854]: E0103 06:33:45.118681 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:33:46 crc kubenswrapper[4854]: I0103 06:33:46.568910 4854 generic.go:334] "Generic (PLEG): container finished" podID="64345244-2d92-4d0a-b58b-509269353b9d" containerID="096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37" exitCode=0 Jan 03 06:33:46 crc kubenswrapper[4854]: I0103 06:33:46.569519 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g56rc" event={"ID":"64345244-2d92-4d0a-b58b-509269353b9d","Type":"ContainerDied","Data":"096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37"} Jan 03 06:33:47 crc kubenswrapper[4854]: I0103 06:33:47.581135 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g56rc" event={"ID":"64345244-2d92-4d0a-b58b-509269353b9d","Type":"ContainerStarted","Data":"7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9"} Jan 03 06:33:47 crc kubenswrapper[4854]: I0103 06:33:47.606243 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g56rc" podStartSLOduration=4.160878067 podStartE2EDuration="6.606224869s" podCreationTimestamp="2026-01-03 06:33:41 +0000 UTC" firstStartedPulling="2026-01-03 06:33:44.542254997 +0000 UTC m=+3202.868831589" lastFinishedPulling="2026-01-03 
06:33:46.987601819 +0000 UTC m=+3205.314178391" observedRunningTime="2026-01-03 06:33:47.596731818 +0000 UTC m=+3205.923308400" watchObservedRunningTime="2026-01-03 06:33:47.606224869 +0000 UTC m=+3205.932801431" Jan 03 06:33:52 crc kubenswrapper[4854]: I0103 06:33:52.321075 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:52 crc kubenswrapper[4854]: I0103 06:33:52.321536 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:52 crc kubenswrapper[4854]: I0103 06:33:52.406267 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:52 crc kubenswrapper[4854]: I0103 06:33:52.693906 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:53 crc kubenswrapper[4854]: I0103 06:33:53.666034 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g56rc"] Jan 03 06:33:54 crc kubenswrapper[4854]: I0103 06:33:54.654257 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g56rc" podUID="64345244-2d92-4d0a-b58b-509269353b9d" containerName="registry-server" containerID="cri-o://7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9" gracePeriod=2 Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.233258 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.419366 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-catalog-content\") pod \"64345244-2d92-4d0a-b58b-509269353b9d\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.419509 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-utilities\") pod \"64345244-2d92-4d0a-b58b-509269353b9d\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.419789 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nqsn\" (UniqueName: \"kubernetes.io/projected/64345244-2d92-4d0a-b58b-509269353b9d-kube-api-access-7nqsn\") pod \"64345244-2d92-4d0a-b58b-509269353b9d\" (UID: \"64345244-2d92-4d0a-b58b-509269353b9d\") " Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.421106 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-utilities" (OuterVolumeSpecName: "utilities") pod "64345244-2d92-4d0a-b58b-509269353b9d" (UID: "64345244-2d92-4d0a-b58b-509269353b9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.430335 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64345244-2d92-4d0a-b58b-509269353b9d-kube-api-access-7nqsn" (OuterVolumeSpecName: "kube-api-access-7nqsn") pod "64345244-2d92-4d0a-b58b-509269353b9d" (UID: "64345244-2d92-4d0a-b58b-509269353b9d"). InnerVolumeSpecName "kube-api-access-7nqsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.475544 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64345244-2d92-4d0a-b58b-509269353b9d" (UID: "64345244-2d92-4d0a-b58b-509269353b9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.522783 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nqsn\" (UniqueName: \"kubernetes.io/projected/64345244-2d92-4d0a-b58b-509269353b9d-kube-api-access-7nqsn\") on node \"crc\" DevicePath \"\"" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.522825 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.522836 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64345244-2d92-4d0a-b58b-509269353b9d-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.665481 4854 generic.go:334] "Generic (PLEG): container finished" podID="64345244-2d92-4d0a-b58b-509269353b9d" containerID="7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9" exitCode=0 Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.665532 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g56rc" event={"ID":"64345244-2d92-4d0a-b58b-509269353b9d","Type":"ContainerDied","Data":"7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9"} Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.665561 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g56rc" event={"ID":"64345244-2d92-4d0a-b58b-509269353b9d","Type":"ContainerDied","Data":"1efb7a18c0595034985c2ee042ec90d5a6cac6bfe7acb9d5bcd861c5cadc1813"} Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.665578 4854 scope.go:117] "RemoveContainer" containerID="7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.665755 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g56rc" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.712212 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g56rc"] Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.719941 4854 scope.go:117] "RemoveContainer" containerID="096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.724567 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g56rc"] Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.746062 4854 scope.go:117] "RemoveContainer" containerID="d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.796640 4854 scope.go:117] "RemoveContainer" containerID="7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9" Jan 03 06:33:55 crc kubenswrapper[4854]: E0103 06:33:55.797124 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9\": container with ID starting with 7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9 not found: ID does not exist" containerID="7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.797166 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9"} err="failed to get container status \"7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9\": rpc error: code = NotFound desc = could not find container \"7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9\": container with ID starting with 7008690c60885f3b7f7426005a7e35578270474cf518d6a42e9e701ebd435cb9 not found: ID does not exist" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.797204 4854 scope.go:117] "RemoveContainer" containerID="096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37" Jan 03 06:33:55 crc kubenswrapper[4854]: E0103 06:33:55.797642 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37\": container with ID starting with 096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37 not found: ID does not exist" containerID="096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.797685 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37"} err="failed to get container status \"096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37\": rpc error: code = NotFound desc = could not find container \"096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37\": container with ID starting with 096721dabd8c2ff3a0acc8650df7939729438c1221576cf311afe5f95b383c37 not found: ID does not exist" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.797715 4854 scope.go:117] "RemoveContainer" containerID="d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70" Jan 03 06:33:55 crc kubenswrapper[4854]: E0103 06:33:55.797988 4854 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70\": container with ID starting with d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70 not found: ID does not exist" containerID="d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70" Jan 03 06:33:55 crc kubenswrapper[4854]: I0103 06:33:55.798016 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70"} err="failed to get container status \"d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70\": rpc error: code = NotFound desc = could not find container \"d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70\": container with ID starting with d3a4371e7ed880d48d28d9445c634554c1cc9268a2f054e40ba5798c17819e70 not found: ID does not exist" Jan 03 06:33:56 crc kubenswrapper[4854]: I0103 06:33:56.132684 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64345244-2d92-4d0a-b58b-509269353b9d" path="/var/lib/kubelet/pods/64345244-2d92-4d0a-b58b-509269353b9d/volumes" Jan 03 06:33:58 crc kubenswrapper[4854]: I0103 06:33:58.118726 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:33:58 crc kubenswrapper[4854]: E0103 06:33:58.119391 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:34:11 crc kubenswrapper[4854]: I0103 06:34:11.118378 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:34:11 crc kubenswrapper[4854]: E0103 06:34:11.119051 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.605111 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wn7nt"] Jan 03 06:34:15 crc kubenswrapper[4854]: E0103 06:34:15.606307 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64345244-2d92-4d0a-b58b-509269353b9d" containerName="extract-content" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.606328 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="64345244-2d92-4d0a-b58b-509269353b9d" containerName="extract-content" Jan 03 06:34:15 crc kubenswrapper[4854]: E0103 06:34:15.606355 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64345244-2d92-4d0a-b58b-509269353b9d" containerName="extract-utilities" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.606363 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="64345244-2d92-4d0a-b58b-509269353b9d" containerName="extract-utilities" Jan 03 06:34:15 crc kubenswrapper[4854]: E0103 
06:34:15.606394 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64345244-2d92-4d0a-b58b-509269353b9d" containerName="registry-server" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.606402 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="64345244-2d92-4d0a-b58b-509269353b9d" containerName="registry-server" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.606680 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="64345244-2d92-4d0a-b58b-509269353b9d" containerName="registry-server" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.608857 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.620482 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wn7nt"] Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.676434 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-utilities\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.676873 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-catalog-content\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.676905 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcnfp\" (UniqueName: \"kubernetes.io/projected/29070506-272b-4fcc-8b26-70d944fcb786-kube-api-access-rcnfp\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.780061 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-catalog-content\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.780129 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcnfp\" (UniqueName: \"kubernetes.io/projected/29070506-272b-4fcc-8b26-70d944fcb786-kube-api-access-rcnfp\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.780303 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-utilities\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.780553 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-catalog-content\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.780858 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-utilities\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.801056 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcnfp\" (UniqueName: \"kubernetes.io/projected/29070506-272b-4fcc-8b26-70d944fcb786-kube-api-access-rcnfp\") pod \"redhat-operators-wn7nt\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:15 crc kubenswrapper[4854]: I0103 06:34:15.940387 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:16 crc kubenswrapper[4854]: I0103 06:34:16.504202 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wn7nt"] Jan 03 06:34:16 crc kubenswrapper[4854]: I0103 06:34:16.898260 4854 generic.go:334] "Generic (PLEG): container finished" podID="29070506-272b-4fcc-8b26-70d944fcb786" containerID="7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3" exitCode=0 Jan 03 06:34:16 crc kubenswrapper[4854]: I0103 06:34:16.898369 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wn7nt" event={"ID":"29070506-272b-4fcc-8b26-70d944fcb786","Type":"ContainerDied","Data":"7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3"} Jan 03 06:34:16 crc kubenswrapper[4854]: I0103 06:34:16.898610 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wn7nt" event={"ID":"29070506-272b-4fcc-8b26-70d944fcb786","Type":"ContainerStarted","Data":"acfb74f598370bd6cda06e1ca9fe293c7e2bdd43c2e59c2a4818775656dae984"} Jan 03 06:34:17 crc kubenswrapper[4854]: I0103 06:34:17.916421 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wn7nt" event={"ID":"29070506-272b-4fcc-8b26-70d944fcb786","Type":"ContainerStarted","Data":"0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00"} Jan 03 06:34:20 crc kubenswrapper[4854]: I0103 06:34:20.964899 4854 generic.go:334] "Generic (PLEG): container finished" podID="29070506-272b-4fcc-8b26-70d944fcb786" containerID="0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00" exitCode=0 Jan 03 06:34:20 crc kubenswrapper[4854]: I0103 06:34:20.965025 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wn7nt" event={"ID":"29070506-272b-4fcc-8b26-70d944fcb786","Type":"ContainerDied","Data":"0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00"} Jan 03 06:34:21 crc kubenswrapper[4854]: I0103 06:34:21.980872 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wn7nt" event={"ID":"29070506-272b-4fcc-8b26-70d944fcb786","Type":"ContainerStarted","Data":"82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b"} Jan 03 06:34:22 crc kubenswrapper[4854]: I0103 06:34:22.015436 4854 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wn7nt" podStartSLOduration=2.3924112490000002 podStartE2EDuration="7.015410291s" podCreationTimestamp="2026-01-03 06:34:15 +0000 UTC" firstStartedPulling="2026-01-03 06:34:16.900292056 +0000 UTC m=+3235.226868628" lastFinishedPulling="2026-01-03 06:34:21.523291098 +0000 UTC m=+3239.849867670" observedRunningTime="2026-01-03 06:34:22.014427307 +0000 UTC m=+3240.341003949" watchObservedRunningTime="2026-01-03 06:34:22.015410291 +0000 UTC m=+3240.341986903" Jan 03 06:34:22 crc kubenswrapper[4854]: I0103 06:34:22.997684 4854 generic.go:334] "Generic (PLEG): container finished" podID="9643ceca-5e7b-4dfd-a68a-27ff08e51c26" containerID="b80acf4276c04936f29cafcbef0078f9368a93714444eb5eecd54a9cb2aafa50" exitCode=0 Jan 03 06:34:22 crc kubenswrapper[4854]: I0103 06:34:22.997784 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" event={"ID":"9643ceca-5e7b-4dfd-a68a-27ff08e51c26","Type":"ContainerDied","Data":"b80acf4276c04936f29cafcbef0078f9368a93714444eb5eecd54a9cb2aafa50"} Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.626219 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.761522 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-telemetry-combined-ca-bundle\") pod \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.761828 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ssh-key\") pod \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.761904 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-2\") pod \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.761951 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-1\") pod \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.762132 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-inventory\") pod \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.762171 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-0\") pod \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") 
" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.762284 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z2vb\" (UniqueName: \"kubernetes.io/projected/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-kube-api-access-5z2vb\") pod \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\" (UID: \"9643ceca-5e7b-4dfd-a68a-27ff08e51c26\") " Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.767408 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9643ceca-5e7b-4dfd-a68a-27ff08e51c26" (UID: "9643ceca-5e7b-4dfd-a68a-27ff08e51c26"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.769270 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-kube-api-access-5z2vb" (OuterVolumeSpecName: "kube-api-access-5z2vb") pod "9643ceca-5e7b-4dfd-a68a-27ff08e51c26" (UID: "9643ceca-5e7b-4dfd-a68a-27ff08e51c26"). InnerVolumeSpecName "kube-api-access-5z2vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.793681 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9643ceca-5e7b-4dfd-a68a-27ff08e51c26" (UID: "9643ceca-5e7b-4dfd-a68a-27ff08e51c26"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.807842 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-inventory" (OuterVolumeSpecName: "inventory") pod "9643ceca-5e7b-4dfd-a68a-27ff08e51c26" (UID: "9643ceca-5e7b-4dfd-a68a-27ff08e51c26"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.814139 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9643ceca-5e7b-4dfd-a68a-27ff08e51c26" (UID: "9643ceca-5e7b-4dfd-a68a-27ff08e51c26"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.815294 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9643ceca-5e7b-4dfd-a68a-27ff08e51c26" (UID: "9643ceca-5e7b-4dfd-a68a-27ff08e51c26"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.824263 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9643ceca-5e7b-4dfd-a68a-27ff08e51c26" (UID: "9643ceca-5e7b-4dfd-a68a-27ff08e51c26"). 
InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.865026 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z2vb\" (UniqueName: \"kubernetes.io/projected/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-kube-api-access-5z2vb\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.865064 4854 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.865100 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.865118 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.865137 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.865156 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:24 crc kubenswrapper[4854]: I0103 06:34:24.865177 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9643ceca-5e7b-4dfd-a68a-27ff08e51c26-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.030685 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" event={"ID":"9643ceca-5e7b-4dfd-a68a-27ff08e51c26","Type":"ContainerDied","Data":"88a915b2aca0ecccbd8cd238bacda418beb6e19f72b35f2806aad7ab32d146e2"} Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.030738 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88a915b2aca0ecccbd8cd238bacda418beb6e19f72b35f2806aad7ab32d146e2" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.031293 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6m698" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.170812 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw"] Jan 03 06:34:25 crc kubenswrapper[4854]: E0103 06:34:25.174298 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9643ceca-5e7b-4dfd-a68a-27ff08e51c26" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.174330 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="9643ceca-5e7b-4dfd-a68a-27ff08e51c26" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.174596 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="9643ceca-5e7b-4dfd-a68a-27ff08e51c26" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.177273 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.180365 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.180544 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.180665 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.180682 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.182098 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.188327 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw"] Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.378775 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.378902 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.378928 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.378956 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzxj\" (UniqueName: \"kubernetes.io/projected/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-kube-api-access-2gzxj\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.379182 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.379247 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.379515 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.481923 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.482353 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.482492 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-inventory\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.482564 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.482656 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gzxj\" (UniqueName: \"kubernetes.io/projected/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-kube-api-access-2gzxj\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.482988 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.483030 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.487815 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.488432 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.489357 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 
06:34:25.489607 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.491673 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.495342 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.506762 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gzxj\" (UniqueName: \"kubernetes.io/projected/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-kube-api-access-2gzxj\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.805586 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.943316 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:25 crc kubenswrapper[4854]: I0103 06:34:25.944602 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:26 crc kubenswrapper[4854]: I0103 06:34:26.118793 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:34:26 crc kubenswrapper[4854]: E0103 06:34:26.119364 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:34:26 crc kubenswrapper[4854]: I0103 06:34:26.434679 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw"] Jan 03 06:34:26 crc kubenswrapper[4854]: W0103 06:34:26.441264 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b9ad0f_caba_426d_a1b7_b2b7c669ab18.slice/crio-ea07568fa7e3c390b3482900e2d8b86e7034471bff891ddbbcb72a15c2f4c433 WatchSource:0}: Error finding container ea07568fa7e3c390b3482900e2d8b86e7034471bff891ddbbcb72a15c2f4c433: Status 404 returned error can't find the container with id ea07568fa7e3c390b3482900e2d8b86e7034471bff891ddbbcb72a15c2f4c433 Jan 03 06:34:27 crc kubenswrapper[4854]: I0103 06:34:27.021540 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wn7nt" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="registry-server" probeResult="failure" output=< Jan 03 06:34:27 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 06:34:27 crc kubenswrapper[4854]: > Jan 03 06:34:27 crc kubenswrapper[4854]: I0103 06:34:27.095254 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" event={"ID":"97b9ad0f-caba-426d-a1b7-b2b7c669ab18","Type":"ContainerStarted","Data":"ea07568fa7e3c390b3482900e2d8b86e7034471bff891ddbbcb72a15c2f4c433"} Jan 03 06:34:28 crc kubenswrapper[4854]: I0103 06:34:28.107855 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" event={"ID":"97b9ad0f-caba-426d-a1b7-b2b7c669ab18","Type":"ContainerStarted","Data":"b53ec96f0b7468906d11356d73f442571196487d4c3fcb6f961d01e3e25a8115"} Jan 03 06:34:28 crc kubenswrapper[4854]: I0103 06:34:28.163895 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" podStartSLOduration=2.6655959620000003 podStartE2EDuration="3.163876955s" podCreationTimestamp="2026-01-03 06:34:25 +0000 UTC" firstStartedPulling="2026-01-03 06:34:26.444492765 +0000 UTC m=+3244.771069337" lastFinishedPulling="2026-01-03 06:34:26.942773758 +0000 UTC m=+3245.269350330" observedRunningTime="2026-01-03 
06:34:28.133933577 +0000 UTC m=+3246.460510159" watchObservedRunningTime="2026-01-03 06:34:28.163876955 +0000 UTC m=+3246.490453527" Jan 03 06:34:36 crc kubenswrapper[4854]: I0103 06:34:36.020099 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:36 crc kubenswrapper[4854]: I0103 06:34:36.077973 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:36 crc kubenswrapper[4854]: I0103 06:34:36.266376 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wn7nt"] Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.217997 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wn7nt" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="registry-server" containerID="cri-o://82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b" gracePeriod=2 Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.742918 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.828284 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcnfp\" (UniqueName: \"kubernetes.io/projected/29070506-272b-4fcc-8b26-70d944fcb786-kube-api-access-rcnfp\") pod \"29070506-272b-4fcc-8b26-70d944fcb786\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.828449 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-utilities\") pod \"29070506-272b-4fcc-8b26-70d944fcb786\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.828583 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-catalog-content\") pod \"29070506-272b-4fcc-8b26-70d944fcb786\" (UID: \"29070506-272b-4fcc-8b26-70d944fcb786\") " Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.829389 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-utilities" (OuterVolumeSpecName: "utilities") pod "29070506-272b-4fcc-8b26-70d944fcb786" (UID: "29070506-272b-4fcc-8b26-70d944fcb786"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.837274 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29070506-272b-4fcc-8b26-70d944fcb786-kube-api-access-rcnfp" (OuterVolumeSpecName: "kube-api-access-rcnfp") pod "29070506-272b-4fcc-8b26-70d944fcb786" (UID: "29070506-272b-4fcc-8b26-70d944fcb786"). InnerVolumeSpecName "kube-api-access-rcnfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.931692 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.931738 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcnfp\" (UniqueName: \"kubernetes.io/projected/29070506-272b-4fcc-8b26-70d944fcb786-kube-api-access-rcnfp\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:37 crc kubenswrapper[4854]: I0103 06:34:37.968060 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29070506-272b-4fcc-8b26-70d944fcb786" (UID: "29070506-272b-4fcc-8b26-70d944fcb786"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.034561 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29070506-272b-4fcc-8b26-70d944fcb786-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.123195 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:34:38 crc kubenswrapper[4854]: E0103 06:34:38.123468 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.232064 4854 generic.go:334] "Generic (PLEG): container finished" podID="29070506-272b-4fcc-8b26-70d944fcb786" containerID="82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b" exitCode=0 Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.232118 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wn7nt" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.232152 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wn7nt" event={"ID":"29070506-272b-4fcc-8b26-70d944fcb786","Type":"ContainerDied","Data":"82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b"} Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.232240 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wn7nt" event={"ID":"29070506-272b-4fcc-8b26-70d944fcb786","Type":"ContainerDied","Data":"acfb74f598370bd6cda06e1ca9fe293c7e2bdd43c2e59c2a4818775656dae984"} Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.232282 4854 scope.go:117] "RemoveContainer" containerID="82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.264337 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wn7nt"] Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.273810 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wn7nt"] Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.276695 4854 scope.go:117] "RemoveContainer" containerID="0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.321617 4854 scope.go:117] "RemoveContainer" containerID="7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.363396 4854 scope.go:117] "RemoveContainer" containerID="82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b" Jan 03 06:34:38 crc kubenswrapper[4854]: E0103 06:34:38.364194 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b\": container with ID starting with 82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b not found: ID does not exist" containerID="82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.364258 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b"} err="failed to get container status \"82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b\": rpc error: code = NotFound desc = could not find container \"82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b\": container with ID starting with 82aee8b88616e5b0d22dbc46da1ce19beecaa9837c05a664a30997df513a310b not found: ID does not exist" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.364301 4854 scope.go:117] "RemoveContainer" containerID="0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00" Jan 03 06:34:38 crc kubenswrapper[4854]: E0103 06:34:38.364616 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00\": container with ID starting with 0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00 not found: ID does not exist" containerID="0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.364718 4854 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00"} err="failed to get container status \"0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00\": rpc error: code = NotFound desc = could not find container \"0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00\": container with ID starting with 0cf88c12c68ff5535733d0335d8e9420ac7a795ca9908a1fc6f59c22f9a35f00 not found: ID does not exist" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.364820 4854 scope.go:117] "RemoveContainer" containerID="7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3" Jan 03 06:34:38 crc kubenswrapper[4854]: E0103 06:34:38.365143 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3\": container with ID starting with 7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3 not found: ID does not exist" containerID="7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3" Jan 03 06:34:38 crc kubenswrapper[4854]: I0103 06:34:38.365237 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3"} err="failed to get container status \"7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3\": rpc error: code = NotFound desc = could not find container \"7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3\": container with ID starting with 7ec535b4e9403c3709c3e8c8b5a572806404075726988e7fd2b437729cbf21f3 not found: ID does not exist" Jan 03 06:34:40 crc kubenswrapper[4854]: I0103 06:34:40.133866 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29070506-272b-4fcc-8b26-70d944fcb786" path="/var/lib/kubelet/pods/29070506-272b-4fcc-8b26-70d944fcb786/volumes" Jan 03 06:34:49 crc kubenswrapper[4854]: I0103 06:34:49.118748 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:34:49 crc kubenswrapper[4854]: E0103 06:34:49.119501 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:35:00 crc kubenswrapper[4854]: I0103 06:35:00.118448 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:35:00 crc kubenswrapper[4854]: E0103 06:35:00.119339 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:35:15 crc kubenswrapper[4854]: I0103 06:35:15.118762 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:35:15 crc 
kubenswrapper[4854]: E0103 06:35:15.119812 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:35:28 crc kubenswrapper[4854]: I0103 06:35:28.120192 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:35:28 crc kubenswrapper[4854]: E0103 06:35:28.122512 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:35:43 crc kubenswrapper[4854]: I0103 06:35:43.118771 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:35:43 crc kubenswrapper[4854]: E0103 06:35:43.119595 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:35:54 crc kubenswrapper[4854]: I0103 06:35:54.118331 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:35:54 crc kubenswrapper[4854]: E0103 06:35:54.119330 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:36:06 crc kubenswrapper[4854]: I0103 06:36:06.118460 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:36:06 crc kubenswrapper[4854]: E0103 06:36:06.119614 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:36:19 crc kubenswrapper[4854]: I0103 06:36:19.118268 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:36:19 crc kubenswrapper[4854]: E0103 06:36:19.119052 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:36:34 crc kubenswrapper[4854]: I0103 06:36:34.118478 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:36:34 crc kubenswrapper[4854]: E0103 06:36:34.119146 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:36:37 crc kubenswrapper[4854]: I0103 06:36:37.733824 4854 generic.go:334] "Generic (PLEG): container finished" podID="97b9ad0f-caba-426d-a1b7-b2b7c669ab18" containerID="b53ec96f0b7468906d11356d73f442571196487d4c3fcb6f961d01e3e25a8115" exitCode=0 Jan 03 06:36:37 crc kubenswrapper[4854]: I0103 06:36:37.733907 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" event={"ID":"97b9ad0f-caba-426d-a1b7-b2b7c669ab18","Type":"ContainerDied","Data":"b53ec96f0b7468906d11356d73f442571196487d4c3fcb6f961d01e3e25a8115"} Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.402550 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.515421 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-2\") pod \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.515517 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-0\") pod \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.515660 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ssh-key\") pod \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.515683 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-inventory\") pod \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.516637 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-1\") pod 
\"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.516701 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-telemetry-power-monitoring-combined-ca-bundle\") pod \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.516746 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gzxj\" (UniqueName: \"kubernetes.io/projected/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-kube-api-access-2gzxj\") pod \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\" (UID: \"97b9ad0f-caba-426d-a1b7-b2b7c669ab18\") " Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.529279 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "97b9ad0f-caba-426d-a1b7-b2b7c669ab18" (UID: "97b9ad0f-caba-426d-a1b7-b2b7c669ab18"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.529620 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-kube-api-access-2gzxj" (OuterVolumeSpecName: "kube-api-access-2gzxj") pod "97b9ad0f-caba-426d-a1b7-b2b7c669ab18" (UID: "97b9ad0f-caba-426d-a1b7-b2b7c669ab18"). InnerVolumeSpecName "kube-api-access-2gzxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.552880 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "97b9ad0f-caba-426d-a1b7-b2b7c669ab18" (UID: "97b9ad0f-caba-426d-a1b7-b2b7c669ab18"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.552991 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-inventory" (OuterVolumeSpecName: "inventory") pod "97b9ad0f-caba-426d-a1b7-b2b7c669ab18" (UID: "97b9ad0f-caba-426d-a1b7-b2b7c669ab18"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.564552 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "97b9ad0f-caba-426d-a1b7-b2b7c669ab18" (UID: "97b9ad0f-caba-426d-a1b7-b2b7c669ab18"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.564652 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "97b9ad0f-caba-426d-a1b7-b2b7c669ab18" (UID: "97b9ad0f-caba-426d-a1b7-b2b7c669ab18"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.578451 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "97b9ad0f-caba-426d-a1b7-b2b7c669ab18" (UID: "97b9ad0f-caba-426d-a1b7-b2b7c669ab18"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.621506 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.621538 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.621550 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.621559 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.621568 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.621577 4854 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.621588 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gzxj\" (UniqueName: \"kubernetes.io/projected/97b9ad0f-caba-426d-a1b7-b2b7c669ab18-kube-api-access-2gzxj\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.764538 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" event={"ID":"97b9ad0f-caba-426d-a1b7-b2b7c669ab18","Type":"ContainerDied","Data":"ea07568fa7e3c390b3482900e2d8b86e7034471bff891ddbbcb72a15c2f4c433"} Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.764599 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea07568fa7e3c390b3482900e2d8b86e7034471bff891ddbbcb72a15c2f4c433" Jan 03 06:36:39 crc kubenswrapper[4854]: 
I0103 06:36:39.764850 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-kctkw" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.902540 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n"] Jan 03 06:36:39 crc kubenswrapper[4854]: E0103 06:36:39.903488 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="extract-utilities" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.903511 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="extract-utilities" Jan 03 06:36:39 crc kubenswrapper[4854]: E0103 06:36:39.903529 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="extract-content" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.903536 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="extract-content" Jan 03 06:36:39 crc kubenswrapper[4854]: E0103 06:36:39.903552 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b9ad0f-caba-426d-a1b7-b2b7c669ab18" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.903559 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b9ad0f-caba-426d-a1b7-b2b7c669ab18" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 03 06:36:39 crc kubenswrapper[4854]: E0103 06:36:39.903596 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="registry-server" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.903602 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="registry-server" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.903815 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b9ad0f-caba-426d-a1b7-b2b7c669ab18" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.903837 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="29070506-272b-4fcc-8b26-70d944fcb786" containerName="registry-server" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.904814 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.907518 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.909783 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.910062 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.910174 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.910326 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4bl62" Jan 03 06:36:39 crc kubenswrapper[4854]: I0103 06:36:39.938665 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n"] Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.038734 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.038805 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.038833 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.038852 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgd4\" (UniqueName: \"kubernetes.io/projected/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-kube-api-access-9jgd4\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.038956 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.142588 4854 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.142645 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.142666 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgd4\" (UniqueName: \"kubernetes.io/projected/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-kube-api-access-9jgd4\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.142791 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.142911 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.148330 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.149068 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.149102 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.154510 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.171496 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgd4\" (UniqueName: \"kubernetes.io/projected/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-kube-api-access-9jgd4\") pod \"logging-edpm-deployment-openstack-edpm-ipam-5kw8n\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.225421 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.835340 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n"] Jan 03 06:36:40 crc kubenswrapper[4854]: I0103 06:36:40.883286 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 06:36:41 crc kubenswrapper[4854]: I0103 06:36:41.785936 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" event={"ID":"d7be9778-71f8-4c1d-a9db-1e12587bb2fd","Type":"ContainerStarted","Data":"7b6d66a487592f78414b621a79b875586eadadb57483390c02a359a6fc6663b4"} Jan 03 06:36:41 crc kubenswrapper[4854]: I0103 06:36:41.786281 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" event={"ID":"d7be9778-71f8-4c1d-a9db-1e12587bb2fd","Type":"ContainerStarted","Data":"56efbacb0518494019f1e0eca15e4a039b8e5f8fe8c65fe9211c48ef8ccdf247"} Jan 03 06:36:41 crc kubenswrapper[4854]: I0103 06:36:41.803548 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" podStartSLOduration=2.352629843 podStartE2EDuration="2.803522765s" podCreationTimestamp="2026-01-03 06:36:39 +0000 UTC" firstStartedPulling="2026-01-03 06:36:40.882974276 +0000 UTC m=+3379.209550858" lastFinishedPulling="2026-01-03 06:36:41.333867208 +0000 UTC m=+3379.660443780" observedRunningTime="2026-01-03 06:36:41.799252469 +0000 UTC m=+3380.125829051" watchObservedRunningTime="2026-01-03 06:36:41.803522765 +0000 UTC m=+3380.130099347" Jan 03 06:36:49 crc kubenswrapper[4854]: I0103 06:36:49.118530 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:36:49 crc kubenswrapper[4854]: E0103 06:36:49.119622 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:36:57 crc kubenswrapper[4854]: I0103 06:36:57.972802 4854 generic.go:334] "Generic (PLEG): container finished" podID="d7be9778-71f8-4c1d-a9db-1e12587bb2fd" containerID="7b6d66a487592f78414b621a79b875586eadadb57483390c02a359a6fc6663b4" exitCode=0 Jan 03 06:36:57 crc kubenswrapper[4854]: I0103 
06:36:57.972874 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" event={"ID":"d7be9778-71f8-4c1d-a9db-1e12587bb2fd","Type":"ContainerDied","Data":"7b6d66a487592f78414b621a79b875586eadadb57483390c02a359a6fc6663b4"} Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.579391 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.696844 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-1\") pod \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.696909 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-0\") pod \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.697023 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-ssh-key\") pod \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.698138 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jgd4\" (UniqueName: \"kubernetes.io/projected/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-kube-api-access-9jgd4\") pod \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.698185 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-inventory\") pod \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\" (UID: \"d7be9778-71f8-4c1d-a9db-1e12587bb2fd\") " Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.702794 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-kube-api-access-9jgd4" (OuterVolumeSpecName: "kube-api-access-9jgd4") pod "d7be9778-71f8-4c1d-a9db-1e12587bb2fd" (UID: "d7be9778-71f8-4c1d-a9db-1e12587bb2fd"). InnerVolumeSpecName "kube-api-access-9jgd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.733737 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d7be9778-71f8-4c1d-a9db-1e12587bb2fd" (UID: "d7be9778-71f8-4c1d-a9db-1e12587bb2fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.735908 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-inventory" (OuterVolumeSpecName: "inventory") pod "d7be9778-71f8-4c1d-a9db-1e12587bb2fd" (UID: "d7be9778-71f8-4c1d-a9db-1e12587bb2fd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.736242 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "d7be9778-71f8-4c1d-a9db-1e12587bb2fd" (UID: "d7be9778-71f8-4c1d-a9db-1e12587bb2fd"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.746418 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "d7be9778-71f8-4c1d-a9db-1e12587bb2fd" (UID: "d7be9778-71f8-4c1d-a9db-1e12587bb2fd"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.801803 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jgd4\" (UniqueName: \"kubernetes.io/projected/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-kube-api-access-9jgd4\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.801838 4854 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.801852 4854 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.801862 4854 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.801871 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7be9778-71f8-4c1d-a9db-1e12587bb2fd-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.999337 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" event={"ID":"d7be9778-71f8-4c1d-a9db-1e12587bb2fd","Type":"ContainerDied","Data":"56efbacb0518494019f1e0eca15e4a039b8e5f8fe8c65fe9211c48ef8ccdf247"} Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.999384 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56efbacb0518494019f1e0eca15e4a039b8e5f8fe8c65fe9211c48ef8ccdf247" Jan 03 06:36:59 crc kubenswrapper[4854]: I0103 06:36:59.999391 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-5kw8n" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.118874 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:37:04 crc kubenswrapper[4854]: E0103 06:37:04.120383 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.386831 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zr2x8"] Jan 03 06:37:04 crc kubenswrapper[4854]: E0103 06:37:04.388037 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7be9778-71f8-4c1d-a9db-1e12587bb2fd" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.388292 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7be9778-71f8-4c1d-a9db-1e12587bb2fd" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.388618 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7be9778-71f8-4c1d-a9db-1e12587bb2fd" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.391367 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.405014 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zr2x8"] Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.514659 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-utilities\") pod \"community-operators-zr2x8\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.514780 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-catalog-content\") pod \"community-operators-zr2x8\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.514817 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4lxv\" (UniqueName: \"kubernetes.io/projected/513a73f9-aa9c-44c8-868d-bbf0e80d973b-kube-api-access-b4lxv\") pod \"community-operators-zr2x8\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.617550 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-catalog-content\") pod \"community-operators-zr2x8\" (UID: 
\"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.617626 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4lxv\" (UniqueName: \"kubernetes.io/projected/513a73f9-aa9c-44c8-868d-bbf0e80d973b-kube-api-access-b4lxv\") pod \"community-operators-zr2x8\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.617782 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-utilities\") pod \"community-operators-zr2x8\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.618155 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-catalog-content\") pod \"community-operators-zr2x8\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.618214 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-utilities\") pod \"community-operators-zr2x8\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.647922 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4lxv\" (UniqueName: \"kubernetes.io/projected/513a73f9-aa9c-44c8-868d-bbf0e80d973b-kube-api-access-b4lxv\") pod \"community-operators-zr2x8\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:04 crc kubenswrapper[4854]: I0103 06:37:04.720746 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:05 crc kubenswrapper[4854]: I0103 06:37:05.256864 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zr2x8"] Jan 03 06:37:06 crc kubenswrapper[4854]: I0103 06:37:06.308833 4854 generic.go:334] "Generic (PLEG): container finished" podID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerID="59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87" exitCode=0 Jan 03 06:37:06 crc kubenswrapper[4854]: I0103 06:37:06.309241 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zr2x8" event={"ID":"513a73f9-aa9c-44c8-868d-bbf0e80d973b","Type":"ContainerDied","Data":"59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87"} Jan 03 06:37:06 crc kubenswrapper[4854]: I0103 06:37:06.309275 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zr2x8" event={"ID":"513a73f9-aa9c-44c8-868d-bbf0e80d973b","Type":"ContainerStarted","Data":"a8398e50885539480b3b6a654a77391535a0fd1e111a1108831a072fa01ad4b1"} Jan 03 06:37:07 crc kubenswrapper[4854]: I0103 06:37:07.336015 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zr2x8" event={"ID":"513a73f9-aa9c-44c8-868d-bbf0e80d973b","Type":"ContainerStarted","Data":"8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65"} Jan 03 06:37:08 crc kubenswrapper[4854]: I0103 06:37:08.348481 4854 generic.go:334] "Generic (PLEG): container finished" podID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerID="8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65" exitCode=0 Jan 03 06:37:08 crc kubenswrapper[4854]: I0103 06:37:08.348538 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zr2x8" event={"ID":"513a73f9-aa9c-44c8-868d-bbf0e80d973b","Type":"ContainerDied","Data":"8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65"} Jan 03 06:37:09 crc kubenswrapper[4854]: I0103 06:37:09.498872 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zr2x8" event={"ID":"513a73f9-aa9c-44c8-868d-bbf0e80d973b","Type":"ContainerStarted","Data":"ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6"} Jan 03 06:37:09 crc kubenswrapper[4854]: I0103 06:37:09.532363 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zr2x8" podStartSLOduration=3.088923679 podStartE2EDuration="5.532341773s" podCreationTimestamp="2026-01-03 06:37:04 +0000 UTC" firstStartedPulling="2026-01-03 06:37:06.312873684 +0000 UTC m=+3404.639450256" lastFinishedPulling="2026-01-03 06:37:08.756291788 +0000 UTC m=+3407.082868350" observedRunningTime="2026-01-03 06:37:09.521240568 +0000 UTC m=+3407.847817140" watchObservedRunningTime="2026-01-03 06:37:09.532341773 +0000 UTC m=+3407.858918335" Jan 03 06:37:14 crc kubenswrapper[4854]: I0103 06:37:14.721915 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:14 crc kubenswrapper[4854]: I0103 06:37:14.722374 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:14 crc kubenswrapper[4854]: I0103 06:37:14.783445 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:15 crc kubenswrapper[4854]: I0103 06:37:15.119152 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:37:15 crc kubenswrapper[4854]: E0103 06:37:15.119809 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:37:15 crc kubenswrapper[4854]: I0103 06:37:15.621472 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:15 crc kubenswrapper[4854]: I0103 06:37:15.667478 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zr2x8"] Jan 03 06:37:17 crc kubenswrapper[4854]: I0103 06:37:17.595593 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zr2x8" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerName="registry-server" containerID="cri-o://ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6" gracePeriod=2 Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.135951 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.224860 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-utilities\") pod \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.225024 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4lxv\" (UniqueName: \"kubernetes.io/projected/513a73f9-aa9c-44c8-868d-bbf0e80d973b-kube-api-access-b4lxv\") pod \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.225151 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-catalog-content\") pod \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\" (UID: \"513a73f9-aa9c-44c8-868d-bbf0e80d973b\") " Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.225975 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-utilities" (OuterVolumeSpecName: "utilities") pod "513a73f9-aa9c-44c8-868d-bbf0e80d973b" (UID: "513a73f9-aa9c-44c8-868d-bbf0e80d973b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.265491 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513a73f9-aa9c-44c8-868d-bbf0e80d973b-kube-api-access-b4lxv" (OuterVolumeSpecName: "kube-api-access-b4lxv") pod "513a73f9-aa9c-44c8-868d-bbf0e80d973b" (UID: "513a73f9-aa9c-44c8-868d-bbf0e80d973b"). 
InnerVolumeSpecName "kube-api-access-b4lxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.323651 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "513a73f9-aa9c-44c8-868d-bbf0e80d973b" (UID: "513a73f9-aa9c-44c8-868d-bbf0e80d973b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.330868 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.330904 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513a73f9-aa9c-44c8-868d-bbf0e80d973b-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.330916 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4lxv\" (UniqueName: \"kubernetes.io/projected/513a73f9-aa9c-44c8-868d-bbf0e80d973b-kube-api-access-b4lxv\") on node \"crc\" DevicePath \"\"" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.617756 4854 generic.go:334] "Generic (PLEG): container finished" podID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerID="ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6" exitCode=0 Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.617888 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zr2x8" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.617876 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zr2x8" event={"ID":"513a73f9-aa9c-44c8-868d-bbf0e80d973b","Type":"ContainerDied","Data":"ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6"} Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.618043 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zr2x8" event={"ID":"513a73f9-aa9c-44c8-868d-bbf0e80d973b","Type":"ContainerDied","Data":"a8398e50885539480b3b6a654a77391535a0fd1e111a1108831a072fa01ad4b1"} Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.618074 4854 scope.go:117] "RemoveContainer" containerID="ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.657813 4854 scope.go:117] "RemoveContainer" containerID="8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.674243 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zr2x8"] Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.694421 4854 scope.go:117] "RemoveContainer" containerID="59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.699563 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zr2x8"] Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.737010 4854 scope.go:117] "RemoveContainer" containerID="ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6" Jan 03 06:37:18 crc 
kubenswrapper[4854]: E0103 06:37:18.737521 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6\": container with ID starting with ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6 not found: ID does not exist" containerID="ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.737573 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6"} err="failed to get container status \"ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6\": rpc error: code = NotFound desc = could not find container \"ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6\": container with ID starting with ae0b4dfaf57be0b71c98f458da9e3927ed1d286e2675445d2a911764711f5cc6 not found: ID does not exist" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.737608 4854 scope.go:117] "RemoveContainer" containerID="8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65" Jan 03 06:37:18 crc kubenswrapper[4854]: E0103 06:37:18.737919 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65\": container with ID starting with 8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65 not found: ID does not exist" containerID="8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.737957 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65"} err="failed to get container status \"8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65\": rpc error: code = NotFound desc = could not find container \"8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65\": container with ID starting with 8a8f7bffcd24e45120c37a27d67d882f672140b431e3d8d5f13e4a4967247b65 not found: ID does not exist" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.737986 4854 scope.go:117] "RemoveContainer" containerID="59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87" Jan 03 06:37:18 crc kubenswrapper[4854]: E0103 06:37:18.738243 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87\": container with ID starting with 59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87 not found: ID does not exist" containerID="59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87" Jan 03 06:37:18 crc kubenswrapper[4854]: I0103 06:37:18.738266 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87"} err="failed to get container status \"59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87\": rpc error: code = NotFound desc = could not find container \"59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87\": container with ID starting with 59d2ffc831502761fa2654887f3e257083db38e44b53ff6be20487f39368ee87 not found: ID does not exist" Jan 03 06:37:20 crc kubenswrapper[4854]: 
I0103 06:37:20.132940 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" path="/var/lib/kubelet/pods/513a73f9-aa9c-44c8-868d-bbf0e80d973b/volumes" Jan 03 06:37:27 crc kubenswrapper[4854]: I0103 06:37:27.118210 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:37:27 crc kubenswrapper[4854]: E0103 06:37:27.119053 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:37:40 crc kubenswrapper[4854]: I0103 06:37:40.118223 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:37:40 crc kubenswrapper[4854]: E0103 06:37:40.119207 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.432219 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wv5np"] Jan 03 06:37:50 crc kubenswrapper[4854]: E0103 06:37:50.433468 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerName="registry-server" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.433487 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerName="registry-server" Jan 03 06:37:50 crc kubenswrapper[4854]: E0103 06:37:50.433551 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerName="extract-content" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.433562 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerName="extract-content" Jan 03 06:37:50 crc kubenswrapper[4854]: E0103 06:37:50.433574 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerName="extract-utilities" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.433583 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerName="extract-utilities" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.433903 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="513a73f9-aa9c-44c8-868d-bbf0e80d973b" containerName="registry-server" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.436390 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.450789 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wv5np"] Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.509424 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-utilities\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.509855 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wk2h\" (UniqueName: \"kubernetes.io/projected/6e2aaf10-c22c-4cc5-80f0-7465a387c999-kube-api-access-2wk2h\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.509926 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-catalog-content\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.611841 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-utilities\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.612074 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wk2h\" (UniqueName: \"kubernetes.io/projected/6e2aaf10-c22c-4cc5-80f0-7465a387c999-kube-api-access-2wk2h\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.612131 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-catalog-content\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.612805 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-utilities\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.612959 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-catalog-content\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.635868 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2wk2h\" (UniqueName: \"kubernetes.io/projected/6e2aaf10-c22c-4cc5-80f0-7465a387c999-kube-api-access-2wk2h\") pod \"certified-operators-wv5np\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:50 crc kubenswrapper[4854]: I0103 06:37:50.775434 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:37:51 crc kubenswrapper[4854]: I0103 06:37:51.118183 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:37:51 crc kubenswrapper[4854]: I0103 06:37:51.327704 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wv5np"] Jan 03 06:37:52 crc kubenswrapper[4854]: I0103 06:37:52.010356 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"9be9571d9b7e30074562447f16093118a686bcb88c3071236f620678b77ed55f"} Jan 03 06:37:52 crc kubenswrapper[4854]: I0103 06:37:52.014235 4854 generic.go:334] "Generic (PLEG): container finished" podID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerID="cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488" exitCode=0 Jan 03 06:37:52 crc kubenswrapper[4854]: I0103 06:37:52.014269 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv5np" event={"ID":"6e2aaf10-c22c-4cc5-80f0-7465a387c999","Type":"ContainerDied","Data":"cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488"} Jan 03 06:37:52 crc kubenswrapper[4854]: I0103 06:37:52.014288 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv5np" event={"ID":"6e2aaf10-c22c-4cc5-80f0-7465a387c999","Type":"ContainerStarted","Data":"a5e6ff060099269d0b0cc84c59f089ee40e554d38178f1daecc7fe9bf362435e"} Jan 03 06:37:53 crc kubenswrapper[4854]: I0103 06:37:53.028439 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv5np" event={"ID":"6e2aaf10-c22c-4cc5-80f0-7465a387c999","Type":"ContainerStarted","Data":"d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90"} Jan 03 06:37:54 crc kubenswrapper[4854]: I0103 06:37:54.039389 4854 generic.go:334] "Generic (PLEG): container finished" podID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerID="d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90" exitCode=0 Jan 03 06:37:54 crc kubenswrapper[4854]: I0103 06:37:54.039478 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv5np" event={"ID":"6e2aaf10-c22c-4cc5-80f0-7465a387c999","Type":"ContainerDied","Data":"d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90"} Jan 03 06:37:55 crc kubenswrapper[4854]: I0103 06:37:55.060167 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv5np" event={"ID":"6e2aaf10-c22c-4cc5-80f0-7465a387c999","Type":"ContainerStarted","Data":"1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5"} Jan 03 06:37:55 crc kubenswrapper[4854]: I0103 06:37:55.089796 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wv5np" podStartSLOduration=2.482795227 
podStartE2EDuration="5.089771567s" podCreationTimestamp="2026-01-03 06:37:50 +0000 UTC" firstStartedPulling="2026-01-03 06:37:52.01596427 +0000 UTC m=+3450.342540842" lastFinishedPulling="2026-01-03 06:37:54.62294061 +0000 UTC m=+3452.949517182" observedRunningTime="2026-01-03 06:37:55.078008166 +0000 UTC m=+3453.404584748" watchObservedRunningTime="2026-01-03 06:37:55.089771567 +0000 UTC m=+3453.416348139" Jan 03 06:38:00 crc kubenswrapper[4854]: I0103 06:38:00.775651 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:38:00 crc kubenswrapper[4854]: I0103 06:38:00.776163 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:38:00 crc kubenswrapper[4854]: I0103 06:38:00.831366 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:38:01 crc kubenswrapper[4854]: I0103 06:38:01.188582 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:38:01 crc kubenswrapper[4854]: I0103 06:38:01.269307 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wv5np"] Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.158390 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wv5np" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerName="registry-server" containerID="cri-o://1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5" gracePeriod=2 Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.677177 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.824741 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-utilities\") pod \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.824921 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wk2h\" (UniqueName: \"kubernetes.io/projected/6e2aaf10-c22c-4cc5-80f0-7465a387c999-kube-api-access-2wk2h\") pod \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.825142 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-catalog-content\") pod \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\" (UID: \"6e2aaf10-c22c-4cc5-80f0-7465a387c999\") " Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.826452 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-utilities" (OuterVolumeSpecName: "utilities") pod "6e2aaf10-c22c-4cc5-80f0-7465a387c999" (UID: "6e2aaf10-c22c-4cc5-80f0-7465a387c999"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.834479 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2aaf10-c22c-4cc5-80f0-7465a387c999-kube-api-access-2wk2h" (OuterVolumeSpecName: "kube-api-access-2wk2h") pod "6e2aaf10-c22c-4cc5-80f0-7465a387c999" (UID: "6e2aaf10-c22c-4cc5-80f0-7465a387c999"). InnerVolumeSpecName "kube-api-access-2wk2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.901353 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e2aaf10-c22c-4cc5-80f0-7465a387c999" (UID: "6e2aaf10-c22c-4cc5-80f0-7465a387c999"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.927853 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.927891 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wk2h\" (UniqueName: \"kubernetes.io/projected/6e2aaf10-c22c-4cc5-80f0-7465a387c999-kube-api-access-2wk2h\") on node \"crc\" DevicePath \"\"" Jan 03 06:38:03 crc kubenswrapper[4854]: I0103 06:38:03.927905 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2aaf10-c22c-4cc5-80f0-7465a387c999-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.174224 4854 generic.go:334] "Generic (PLEG): container finished" podID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerID="1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5" exitCode=0 Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.174271 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv5np" event={"ID":"6e2aaf10-c22c-4cc5-80f0-7465a387c999","Type":"ContainerDied","Data":"1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5"} Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.174305 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wv5np" event={"ID":"6e2aaf10-c22c-4cc5-80f0-7465a387c999","Type":"ContainerDied","Data":"a5e6ff060099269d0b0cc84c59f089ee40e554d38178f1daecc7fe9bf362435e"} Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.174322 4854 scope.go:117] "RemoveContainer" containerID="1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.174476 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wv5np" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.206251 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wv5np"] Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.212279 4854 scope.go:117] "RemoveContainer" containerID="d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.219750 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wv5np"] Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.254314 4854 scope.go:117] "RemoveContainer" containerID="cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.318298 4854 scope.go:117] "RemoveContainer" containerID="1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5" Jan 03 06:38:04 crc kubenswrapper[4854]: E0103 06:38:04.318714 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5\": container with ID starting with 1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5 not found: ID does not exist" containerID="1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.318746 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5"} err="failed to get container status \"1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5\": rpc error: code = NotFound desc = could not find container \"1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5\": container with ID starting with 1e9b545004a385936ba82b0909c4e398279a79bfcd88cdf82799e1e156ec31a5 not found: ID does not exist" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.318771 4854 scope.go:117] "RemoveContainer" containerID="d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90" Jan 03 06:38:04 crc kubenswrapper[4854]: E0103 06:38:04.319517 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90\": container with ID starting with d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90 not found: ID does not exist" containerID="d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.319543 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90"} err="failed to get container status \"d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90\": rpc error: code = NotFound desc = could not find container \"d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90\": container with ID starting with d97d7ce6425c7bb01779082810e30d1299181ef421a190b02ce119a981990f90 not found: ID does not exist" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.319559 4854 scope.go:117] "RemoveContainer" containerID="cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488" Jan 03 06:38:04 crc kubenswrapper[4854]: E0103 06:38:04.319790 4854 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488\": container with ID starting with cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488 not found: ID does not exist" containerID="cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488" Jan 03 06:38:04 crc kubenswrapper[4854]: I0103 06:38:04.319811 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488"} err="failed to get container status \"cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488\": rpc error: code = NotFound desc = could not find container \"cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488\": container with ID starting with cca07264d5be9223d0fb38078aedd4d53e078e67c83f317d0663c3b705569488 not found: ID does not exist" Jan 03 06:38:06 crc kubenswrapper[4854]: I0103 06:38:06.131489 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" path="/var/lib/kubelet/pods/6e2aaf10-c22c-4cc5-80f0-7465a387c999/volumes" Jan 03 06:40:11 crc kubenswrapper[4854]: I0103 06:40:11.755842 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:40:11 crc kubenswrapper[4854]: I0103 06:40:11.756811 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:40:41 crc kubenswrapper[4854]: I0103 06:40:41.755346 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:40:41 crc kubenswrapper[4854]: I0103 06:40:41.755822 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:41:11 crc kubenswrapper[4854]: I0103 06:41:11.756410 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:41:11 crc kubenswrapper[4854]: I0103 06:41:11.757305 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:41:11 crc kubenswrapper[4854]: I0103 06:41:11.757361 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:41:11 crc kubenswrapper[4854]: I0103 06:41:11.758398 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9be9571d9b7e30074562447f16093118a686bcb88c3071236f620678b77ed55f"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:41:11 crc kubenswrapper[4854]: I0103 06:41:11.758471 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://9be9571d9b7e30074562447f16093118a686bcb88c3071236f620678b77ed55f" gracePeriod=600 Jan 03 06:41:12 crc kubenswrapper[4854]: I0103 06:41:12.815512 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="9be9571d9b7e30074562447f16093118a686bcb88c3071236f620678b77ed55f" exitCode=0 Jan 03 06:41:12 crc kubenswrapper[4854]: I0103 06:41:12.815550 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"9be9571d9b7e30074562447f16093118a686bcb88c3071236f620678b77ed55f"} Jan 03 06:41:12 crc kubenswrapper[4854]: I0103 06:41:12.816134 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f"} Jan 03 06:41:12 crc kubenswrapper[4854]: I0103 06:41:12.816181 4854 scope.go:117] "RemoveContainer" containerID="d1062222733ed9b45e74b489b85666fabd821cc1e4231cded66f368d3ddfe0d2" Jan 03 06:43:03 crc kubenswrapper[4854]: E0103 06:43:03.894326 4854 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.102:53024->38.102.83.102:42659: write tcp 38.102.83.102:53024->38.102.83.102:42659: write: connection reset by peer Jan 03 06:43:42 crc kubenswrapper[4854]: I0103 06:43:42.051726 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:43:42 crc kubenswrapper[4854]: I0103 06:43:42.052394 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:44:11 crc kubenswrapper[4854]: I0103 06:44:11.755828 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:44:11 crc kubenswrapper[4854]: I0103 06:44:11.756591 4854 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:44:41 crc kubenswrapper[4854]: I0103 06:44:41.755501 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:44:41 crc kubenswrapper[4854]: I0103 06:44:41.756146 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:44:41 crc kubenswrapper[4854]: I0103 06:44:41.756211 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:44:41 crc kubenswrapper[4854]: I0103 06:44:41.758254 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:44:41 crc kubenswrapper[4854]: I0103 06:44:41.758362 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" gracePeriod=600 Jan 03 06:44:41 crc kubenswrapper[4854]: E0103 06:44:41.897414 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:44:41 crc kubenswrapper[4854]: I0103 06:44:41.908272 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" exitCode=0 Jan 03 06:44:41 crc kubenswrapper[4854]: I0103 06:44:41.908639 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f"} Jan 03 06:44:41 crc kubenswrapper[4854]: I0103 06:44:41.908755 4854 scope.go:117] "RemoveContainer" containerID="9be9571d9b7e30074562447f16093118a686bcb88c3071236f620678b77ed55f" Jan 03 06:44:42 crc kubenswrapper[4854]: I0103 06:44:42.934125 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:44:42 crc kubenswrapper[4854]: E0103 
06:44:42.935238 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.743413 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q5k88"] Jan 03 06:44:53 crc kubenswrapper[4854]: E0103 06:44:53.744949 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerName="extract-utilities" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.744971 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerName="extract-utilities" Jan 03 06:44:53 crc kubenswrapper[4854]: E0103 06:44:53.745042 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerName="extract-content" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.745054 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerName="extract-content" Jan 03 06:44:53 crc kubenswrapper[4854]: E0103 06:44:53.745098 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerName="registry-server" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.745110 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerName="registry-server" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.745501 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2aaf10-c22c-4cc5-80f0-7465a387c999" containerName="registry-server" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.748764 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.775948 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5k88"] Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.781318 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-utilities\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.781378 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddjdc\" (UniqueName: \"kubernetes.io/projected/f23290a4-73d0-4ae6-b628-f02b4b179555-kube-api-access-ddjdc\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.781460 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-catalog-content\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.883583 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-utilities\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.883941 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddjdc\" (UniqueName: \"kubernetes.io/projected/f23290a4-73d0-4ae6-b628-f02b4b179555-kube-api-access-ddjdc\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.884144 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-catalog-content\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.884329 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-utilities\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.884673 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-catalog-content\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:53 crc kubenswrapper[4854]: I0103 06:44:53.908654 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ddjdc\" (UniqueName: \"kubernetes.io/projected/f23290a4-73d0-4ae6-b628-f02b4b179555-kube-api-access-ddjdc\") pod \"redhat-marketplace-q5k88\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:54 crc kubenswrapper[4854]: I0103 06:44:54.080352 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:44:54 crc kubenswrapper[4854]: I0103 06:44:54.579071 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5k88"] Jan 03 06:44:55 crc kubenswrapper[4854]: I0103 06:44:55.094590 4854 generic.go:334] "Generic (PLEG): container finished" podID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerID="1f9c46cbf2d16cd5a9e4b94df901aebc3ce6ef9efc0fdd85cac31822ee0ce9ec" exitCode=0 Jan 03 06:44:55 crc kubenswrapper[4854]: I0103 06:44:55.094644 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5k88" event={"ID":"f23290a4-73d0-4ae6-b628-f02b4b179555","Type":"ContainerDied","Data":"1f9c46cbf2d16cd5a9e4b94df901aebc3ce6ef9efc0fdd85cac31822ee0ce9ec"} Jan 03 06:44:55 crc kubenswrapper[4854]: I0103 06:44:55.094944 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5k88" event={"ID":"f23290a4-73d0-4ae6-b628-f02b4b179555","Type":"ContainerStarted","Data":"78af1a2a178a03d229fd6d89ce735b6924e4158119a4a19389d5de7d6889443c"} Jan 03 06:44:55 crc kubenswrapper[4854]: I0103 06:44:55.099729 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 06:44:55 crc kubenswrapper[4854]: I0103 06:44:55.118467 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:44:55 crc kubenswrapper[4854]: E0103 06:44:55.118755 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:44:57 crc kubenswrapper[4854]: I0103 06:44:57.120982 4854 generic.go:334] "Generic (PLEG): container finished" podID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerID="67aa3bfb767380b3f3b9246afffe365eb978714334eb852f1cf068e0dbfad29d" exitCode=0 Jan 03 06:44:57 crc kubenswrapper[4854]: I0103 06:44:57.121170 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5k88" event={"ID":"f23290a4-73d0-4ae6-b628-f02b4b179555","Type":"ContainerDied","Data":"67aa3bfb767380b3f3b9246afffe365eb978714334eb852f1cf068e0dbfad29d"} Jan 03 06:44:58 crc kubenswrapper[4854]: I0103 06:44:58.138670 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5k88" event={"ID":"f23290a4-73d0-4ae6-b628-f02b4b179555","Type":"ContainerStarted","Data":"d1d0249091d44b673d1f855e59b291c384ae33aa5919903e5682b281609fc4bd"} Jan 03 06:44:58 crc kubenswrapper[4854]: I0103 06:44:58.172126 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q5k88" podStartSLOduration=2.734252144 podStartE2EDuration="5.172074355s" 
podCreationTimestamp="2026-01-03 06:44:53 +0000 UTC" firstStartedPulling="2026-01-03 06:44:55.099500119 +0000 UTC m=+3873.426076691" lastFinishedPulling="2026-01-03 06:44:57.53732229 +0000 UTC m=+3875.863898902" observedRunningTime="2026-01-03 06:44:58.164601099 +0000 UTC m=+3876.491177671" watchObservedRunningTime="2026-01-03 06:44:58.172074355 +0000 UTC m=+3876.498650927" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.173180 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc"] Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.176872 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.180220 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.180560 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.189317 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc"] Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.270780 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f367193-592a-4ab3-8898-53bffecb915e-secret-volume\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.270940 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nlcx\" (UniqueName: \"kubernetes.io/projected/1f367193-592a-4ab3-8898-53bffecb915e-kube-api-access-5nlcx\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.271101 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f367193-592a-4ab3-8898-53bffecb915e-config-volume\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.373702 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f367193-592a-4ab3-8898-53bffecb915e-secret-volume\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.373902 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nlcx\" (UniqueName: \"kubernetes.io/projected/1f367193-592a-4ab3-8898-53bffecb915e-kube-api-access-5nlcx\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.373986 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f367193-592a-4ab3-8898-53bffecb915e-config-volume\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.375338 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f367193-592a-4ab3-8898-53bffecb915e-config-volume\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.380529 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f367193-592a-4ab3-8898-53bffecb915e-secret-volume\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.392840 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nlcx\" (UniqueName: \"kubernetes.io/projected/1f367193-592a-4ab3-8898-53bffecb915e-kube-api-access-5nlcx\") pod \"collect-profiles-29457045-vwxsc\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:00 crc kubenswrapper[4854]: I0103 06:45:00.496377 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:01 crc kubenswrapper[4854]: I0103 06:45:01.031809 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc"] Jan 03 06:45:01 crc kubenswrapper[4854]: I0103 06:45:01.191304 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" event={"ID":"1f367193-592a-4ab3-8898-53bffecb915e","Type":"ContainerStarted","Data":"2ee8e3256163484c017b34b4f9e5901a2d817f88dcf3fcbb5a459d39344ce040"} Jan 03 06:45:02 crc kubenswrapper[4854]: I0103 06:45:02.205740 4854 generic.go:334] "Generic (PLEG): container finished" podID="1f367193-592a-4ab3-8898-53bffecb915e" containerID="6da0036bbe5dc63fa42bda811d8acbe8d562af4da5712f146cd46919da63196e" exitCode=0 Jan 03 06:45:02 crc kubenswrapper[4854]: I0103 06:45:02.205823 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" event={"ID":"1f367193-592a-4ab3-8898-53bffecb915e","Type":"ContainerDied","Data":"6da0036bbe5dc63fa42bda811d8acbe8d562af4da5712f146cd46919da63196e"} Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.728275 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.879428 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nlcx\" (UniqueName: \"kubernetes.io/projected/1f367193-592a-4ab3-8898-53bffecb915e-kube-api-access-5nlcx\") pod \"1f367193-592a-4ab3-8898-53bffecb915e\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.879817 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f367193-592a-4ab3-8898-53bffecb915e-config-volume\") pod \"1f367193-592a-4ab3-8898-53bffecb915e\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.880234 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f367193-592a-4ab3-8898-53bffecb915e-secret-volume\") pod \"1f367193-592a-4ab3-8898-53bffecb915e\" (UID: \"1f367193-592a-4ab3-8898-53bffecb915e\") " Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.881003 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f367193-592a-4ab3-8898-53bffecb915e-config-volume" (OuterVolumeSpecName: "config-volume") pod "1f367193-592a-4ab3-8898-53bffecb915e" (UID: "1f367193-592a-4ab3-8898-53bffecb915e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.891235 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f367193-592a-4ab3-8898-53bffecb915e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1f367193-592a-4ab3-8898-53bffecb915e" (UID: "1f367193-592a-4ab3-8898-53bffecb915e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.891342 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f367193-592a-4ab3-8898-53bffecb915e-kube-api-access-5nlcx" (OuterVolumeSpecName: "kube-api-access-5nlcx") pod "1f367193-592a-4ab3-8898-53bffecb915e" (UID: "1f367193-592a-4ab3-8898-53bffecb915e"). InnerVolumeSpecName "kube-api-access-5nlcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.984291 4854 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f367193-592a-4ab3-8898-53bffecb915e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.984323 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nlcx\" (UniqueName: \"kubernetes.io/projected/1f367193-592a-4ab3-8898-53bffecb915e-kube-api-access-5nlcx\") on node \"crc\" DevicePath \"\"" Jan 03 06:45:03 crc kubenswrapper[4854]: I0103 06:45:03.984338 4854 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f367193-592a-4ab3-8898-53bffecb915e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.080517 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.080573 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.146803 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.228595 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.228587 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457045-vwxsc" event={"ID":"1f367193-592a-4ab3-8898-53bffecb915e","Type":"ContainerDied","Data":"2ee8e3256163484c017b34b4f9e5901a2d817f88dcf3fcbb5a459d39344ce040"} Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.228739 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ee8e3256163484c017b34b4f9e5901a2d817f88dcf3fcbb5a459d39344ce040" Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.282279 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.893443 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c"] Jan 03 06:45:04 crc kubenswrapper[4854]: I0103 06:45:04.911965 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457000-4b58c"] Jan 03 06:45:06 crc kubenswrapper[4854]: I0103 06:45:06.139571 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef59ea8-bada-439a-a6fe-1745e38b01c7" path="/var/lib/kubelet/pods/bef59ea8-bada-439a-a6fe-1745e38b01c7/volumes" Jan 03 06:45:07 crc kubenswrapper[4854]: I0103 06:45:07.119786 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:45:07 crc kubenswrapper[4854]: E0103 06:45:07.120518 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:45:07 crc kubenswrapper[4854]: I0103 06:45:07.711223 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5k88"] Jan 03 06:45:07 crc kubenswrapper[4854]: I0103 06:45:07.711838 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q5k88" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerName="registry-server" containerID="cri-o://d1d0249091d44b673d1f855e59b291c384ae33aa5919903e5682b281609fc4bd" gracePeriod=2 Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.295935 4854 generic.go:334] "Generic (PLEG): container finished" podID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerID="d1d0249091d44b673d1f855e59b291c384ae33aa5919903e5682b281609fc4bd" exitCode=0 Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.296032 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5k88" event={"ID":"f23290a4-73d0-4ae6-b628-f02b4b179555","Type":"ContainerDied","Data":"d1d0249091d44b673d1f855e59b291c384ae33aa5919903e5682b281609fc4bd"} Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.296235 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5k88" event={"ID":"f23290a4-73d0-4ae6-b628-f02b4b179555","Type":"ContainerDied","Data":"78af1a2a178a03d229fd6d89ce735b6924e4158119a4a19389d5de7d6889443c"} Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.296252 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78af1a2a178a03d229fd6d89ce735b6924e4158119a4a19389d5de7d6889443c" Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.326984 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.496891 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-catalog-content\") pod \"f23290a4-73d0-4ae6-b628-f02b4b179555\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.496943 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddjdc\" (UniqueName: \"kubernetes.io/projected/f23290a4-73d0-4ae6-b628-f02b4b179555-kube-api-access-ddjdc\") pod \"f23290a4-73d0-4ae6-b628-f02b4b179555\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.497119 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-utilities\") pod \"f23290a4-73d0-4ae6-b628-f02b4b179555\" (UID: \"f23290a4-73d0-4ae6-b628-f02b4b179555\") " Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.498243 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-utilities" (OuterVolumeSpecName: "utilities") pod "f23290a4-73d0-4ae6-b628-f02b4b179555" (UID: "f23290a4-73d0-4ae6-b628-f02b4b179555"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.504833 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23290a4-73d0-4ae6-b628-f02b4b179555-kube-api-access-ddjdc" (OuterVolumeSpecName: "kube-api-access-ddjdc") pod "f23290a4-73d0-4ae6-b628-f02b4b179555" (UID: "f23290a4-73d0-4ae6-b628-f02b4b179555"). InnerVolumeSpecName "kube-api-access-ddjdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.543293 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f23290a4-73d0-4ae6-b628-f02b4b179555" (UID: "f23290a4-73d0-4ae6-b628-f02b4b179555"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.599411 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.599690 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23290a4-73d0-4ae6-b628-f02b4b179555-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:45:08 crc kubenswrapper[4854]: I0103 06:45:08.599777 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddjdc\" (UniqueName: \"kubernetes.io/projected/f23290a4-73d0-4ae6-b628-f02b4b179555-kube-api-access-ddjdc\") on node \"crc\" DevicePath \"\"" Jan 03 06:45:09 crc kubenswrapper[4854]: I0103 06:45:09.307576 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5k88" Jan 03 06:45:09 crc kubenswrapper[4854]: I0103 06:45:09.364911 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5k88"] Jan 03 06:45:09 crc kubenswrapper[4854]: I0103 06:45:09.387397 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5k88"] Jan 03 06:45:10 crc kubenswrapper[4854]: I0103 06:45:10.158686 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" path="/var/lib/kubelet/pods/f23290a4-73d0-4ae6-b628-f02b4b179555/volumes" Jan 03 06:45:21 crc kubenswrapper[4854]: I0103 06:45:21.117992 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:45:21 crc kubenswrapper[4854]: E0103 06:45:21.118671 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:45:24 crc kubenswrapper[4854]: E0103 06:45:24.425842 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:45:35 crc kubenswrapper[4854]: I0103 06:45:35.118940 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:45:35 crc kubenswrapper[4854]: E0103 06:45:35.120273 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.812897 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-blrt5"] Jan 03 06:45:43 crc kubenswrapper[4854]: E0103 06:45:43.817921 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f367193-592a-4ab3-8898-53bffecb915e" containerName="collect-profiles" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.817958 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f367193-592a-4ab3-8898-53bffecb915e" containerName="collect-profiles" Jan 03 06:45:43 crc kubenswrapper[4854]: E0103 06:45:43.818004 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerName="extract-utilities" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.818016 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerName="extract-utilities" Jan 03 06:45:43 crc kubenswrapper[4854]: E0103 06:45:43.818071 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerName="registry-server" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.818104 4854 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerName="registry-server" Jan 03 06:45:43 crc kubenswrapper[4854]: E0103 06:45:43.818167 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerName="extract-content" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.818176 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerName="extract-content" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.819146 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f367193-592a-4ab3-8898-53bffecb915e" containerName="collect-profiles" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.819176 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23290a4-73d0-4ae6-b628-f02b4b179555" containerName="registry-server" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.824288 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.835886 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blrt5"] Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.961728 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjdnh\" (UniqueName: \"kubernetes.io/projected/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-kube-api-access-mjdnh\") pod \"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.961782 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-catalog-content\") pod \"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:43 crc kubenswrapper[4854]: I0103 06:45:43.962207 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-utilities\") pod \"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.064826 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjdnh\" (UniqueName: \"kubernetes.io/projected/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-kube-api-access-mjdnh\") pod \"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.065198 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-catalog-content\") pod \"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.065347 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-utilities\") pod 
\"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.065686 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-catalog-content\") pod \"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.065844 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-utilities\") pod \"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.087016 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjdnh\" (UniqueName: \"kubernetes.io/projected/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-kube-api-access-mjdnh\") pod \"redhat-operators-blrt5\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.169277 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.717937 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blrt5"] Jan 03 06:45:44 crc kubenswrapper[4854]: I0103 06:45:44.856004 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blrt5" event={"ID":"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce","Type":"ContainerStarted","Data":"fd7f372c92c5f9b373a625d5a0e203f57e298b4dd961ebe92ba8b7128dfe762c"} Jan 03 06:45:45 crc kubenswrapper[4854]: I0103 06:45:45.872187 4854 generic.go:334] "Generic (PLEG): container finished" podID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerID="999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d" exitCode=0 Jan 03 06:45:45 crc kubenswrapper[4854]: I0103 06:45:45.872284 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blrt5" event={"ID":"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce","Type":"ContainerDied","Data":"999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d"} Jan 03 06:45:47 crc kubenswrapper[4854]: I0103 06:45:47.894998 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blrt5" event={"ID":"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce","Type":"ContainerStarted","Data":"d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8"} Jan 03 06:45:49 crc kubenswrapper[4854]: I0103 06:45:49.917499 4854 generic.go:334] "Generic (PLEG): container finished" podID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerID="d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8" exitCode=0 Jan 03 06:45:49 crc kubenswrapper[4854]: I0103 06:45:49.917610 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blrt5" event={"ID":"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce","Type":"ContainerDied","Data":"d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8"} Jan 03 06:45:50 crc kubenswrapper[4854]: I0103 06:45:50.118870 4854 scope.go:117] "RemoveContainer" 
containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:45:50 crc kubenswrapper[4854]: E0103 06:45:50.119531 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:45:50 crc kubenswrapper[4854]: I0103 06:45:50.934302 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blrt5" event={"ID":"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce","Type":"ContainerStarted","Data":"e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23"} Jan 03 06:45:50 crc kubenswrapper[4854]: I0103 06:45:50.966474 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-blrt5" podStartSLOduration=3.405725169 podStartE2EDuration="7.966444755s" podCreationTimestamp="2026-01-03 06:45:43 +0000 UTC" firstStartedPulling="2026-01-03 06:45:45.874570152 +0000 UTC m=+3924.201146734" lastFinishedPulling="2026-01-03 06:45:50.435289748 +0000 UTC m=+3928.761866320" observedRunningTime="2026-01-03 06:45:50.960459607 +0000 UTC m=+3929.287036229" watchObservedRunningTime="2026-01-03 06:45:50.966444755 +0000 UTC m=+3929.293021357" Jan 03 06:45:54 crc kubenswrapper[4854]: I0103 06:45:54.169542 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:54 crc kubenswrapper[4854]: I0103 06:45:54.170185 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:45:55 crc kubenswrapper[4854]: I0103 06:45:55.303894 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-blrt5" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="registry-server" probeResult="failure" output=< Jan 03 06:45:55 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 06:45:55 crc kubenswrapper[4854]: > Jan 03 06:45:57 crc kubenswrapper[4854]: E0103 06:45:57.395715 4854 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.102:59238->38.102.83.102:42659: read tcp 38.102.83.102:59238->38.102.83.102:42659: read: connection reset by peer Jan 03 06:46:04 crc kubenswrapper[4854]: I0103 06:46:04.118734 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:46:04 crc kubenswrapper[4854]: E0103 06:46:04.120359 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:46:04 crc kubenswrapper[4854]: I0103 06:46:04.224268 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:46:04 crc kubenswrapper[4854]: I0103 06:46:04.282856 4854 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:46:04 crc kubenswrapper[4854]: I0103 06:46:04.466910 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blrt5"] Jan 03 06:46:04 crc kubenswrapper[4854]: I0103 06:46:04.529251 4854 scope.go:117] "RemoveContainer" containerID="4d6e53659ed39a5dd6a8c8ba3ef8f6f3d84c57c3ace82ad4b9809b2f249492a2" Jan 03 06:46:06 crc kubenswrapper[4854]: I0103 06:46:06.106541 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-blrt5" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="registry-server" containerID="cri-o://e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23" gracePeriod=2 Jan 03 06:46:06 crc kubenswrapper[4854]: I0103 06:46:06.865927 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.062633 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjdnh\" (UniqueName: \"kubernetes.io/projected/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-kube-api-access-mjdnh\") pod \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.062710 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-catalog-content\") pod \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.062824 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-utilities\") pod \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\" (UID: \"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce\") " Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.063952 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-utilities" (OuterVolumeSpecName: "utilities") pod "1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" (UID: "1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.082346 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-kube-api-access-mjdnh" (OuterVolumeSpecName: "kube-api-access-mjdnh") pod "1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" (UID: "1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce"). InnerVolumeSpecName "kube-api-access-mjdnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.122641 4854 generic.go:334] "Generic (PLEG): container finished" podID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerID="e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23" exitCode=0 Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.122681 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blrt5" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.122684 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blrt5" event={"ID":"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce","Type":"ContainerDied","Data":"e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23"} Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.122731 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blrt5" event={"ID":"1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce","Type":"ContainerDied","Data":"fd7f372c92c5f9b373a625d5a0e203f57e298b4dd961ebe92ba8b7128dfe762c"} Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.122759 4854 scope.go:117] "RemoveContainer" containerID="e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.166044 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjdnh\" (UniqueName: \"kubernetes.io/projected/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-kube-api-access-mjdnh\") on node \"crc\" DevicePath \"\"" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.166299 4854 scope.go:117] "RemoveContainer" containerID="d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.166743 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.195701 4854 scope.go:117] "RemoveContainer" containerID="999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.228894 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" (UID: "1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.243247 4854 scope.go:117] "RemoveContainer" containerID="e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23" Jan 03 06:46:07 crc kubenswrapper[4854]: E0103 06:46:07.243734 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23\": container with ID starting with e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23 not found: ID does not exist" containerID="e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.243769 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23"} err="failed to get container status \"e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23\": rpc error: code = NotFound desc = could not find container \"e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23\": container with ID starting with e551079aa09f5ec7a01e46b64fcde9da751d07cedc40d2a82fafe6eab897dc23 not found: ID does not exist" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.243828 4854 scope.go:117] "RemoveContainer" containerID="d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8" Jan 03 06:46:07 crc kubenswrapper[4854]: E0103 06:46:07.244101 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8\": container with ID starting with d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8 not found: ID does not exist" containerID="d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.244152 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8"} err="failed to get container status \"d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8\": rpc error: code = NotFound desc = could not find container \"d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8\": container with ID starting with d31f4d96208980b4aa1e8637d536a18b8fdc26d73aa45ce994abde957235bca8 not found: ID does not exist" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.244180 4854 scope.go:117] "RemoveContainer" containerID="999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d" Jan 03 06:46:07 crc kubenswrapper[4854]: E0103 06:46:07.244523 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d\": container with ID starting with 999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d not found: ID does not exist" containerID="999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.244548 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d"} err="failed to get container status \"999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d\": rpc error: code = NotFound desc = could not 
find container \"999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d\": container with ID starting with 999837c3ebf46414be3e6869924a5d7f7acb1ea3513ee6b8bf7e453c5ad5d61d not found: ID does not exist" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.270377 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.471466 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blrt5"] Jan 03 06:46:07 crc kubenswrapper[4854]: I0103 06:46:07.483581 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-blrt5"] Jan 03 06:46:08 crc kubenswrapper[4854]: I0103 06:46:08.142037 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" path="/var/lib/kubelet/pods/1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce/volumes" Jan 03 06:46:17 crc kubenswrapper[4854]: I0103 06:46:17.120272 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:46:17 crc kubenswrapper[4854]: E0103 06:46:17.121525 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:46:30 crc kubenswrapper[4854]: I0103 06:46:30.118317 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:46:30 crc kubenswrapper[4854]: E0103 06:46:30.119478 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:46:41 crc kubenswrapper[4854]: I0103 06:46:41.118991 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:46:41 crc kubenswrapper[4854]: E0103 06:46:41.120471 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:46:56 crc kubenswrapper[4854]: I0103 06:46:56.119300 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:46:56 crc kubenswrapper[4854]: E0103 06:46:56.120407 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:47:07 crc kubenswrapper[4854]: I0103 06:47:07.119244 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:47:07 crc kubenswrapper[4854]: E0103 06:47:07.120438 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.268315 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6zbj7"] Jan 03 06:47:12 crc kubenswrapper[4854]: E0103 06:47:12.270252 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="registry-server" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.270287 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="registry-server" Jan 03 06:47:12 crc kubenswrapper[4854]: E0103 06:47:12.270336 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="extract-content" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.270352 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="extract-content" Jan 03 06:47:12 crc kubenswrapper[4854]: E0103 06:47:12.270390 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="extract-utilities" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.270406 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="extract-utilities" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.270924 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7d3ce2-ad67-4183-b401-c0ca56d0a0ce" containerName="registry-server" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.274404 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.282329 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zbj7"] Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.335262 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-catalog-content\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.335396 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-utilities\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.335517 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw24\" (UniqueName: \"kubernetes.io/projected/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-kube-api-access-bzw24\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.438892 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzw24\" (UniqueName: \"kubernetes.io/projected/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-kube-api-access-bzw24\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.442612 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-catalog-content\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.442740 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-utilities\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.443430 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-catalog-content\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.443908 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-utilities\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.466228 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bzw24\" (UniqueName: \"kubernetes.io/projected/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-kube-api-access-bzw24\") pod \"community-operators-6zbj7\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:12 crc kubenswrapper[4854]: I0103 06:47:12.611364 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:13 crc kubenswrapper[4854]: I0103 06:47:13.176569 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zbj7"] Jan 03 06:47:14 crc kubenswrapper[4854]: I0103 06:47:14.149029 4854 generic.go:334] "Generic (PLEG): container finished" podID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerID="6ff13db7424eb93bfcddcba07ffeff5e623275cb22a16f9ccb0ed993fbc5b2b4" exitCode=0 Jan 03 06:47:14 crc kubenswrapper[4854]: I0103 06:47:14.149168 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zbj7" event={"ID":"9d6896c7-d282-487d-8f0e-d6601e2fd9e2","Type":"ContainerDied","Data":"6ff13db7424eb93bfcddcba07ffeff5e623275cb22a16f9ccb0ed993fbc5b2b4"} Jan 03 06:47:14 crc kubenswrapper[4854]: I0103 06:47:14.149622 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zbj7" event={"ID":"9d6896c7-d282-487d-8f0e-d6601e2fd9e2","Type":"ContainerStarted","Data":"63e63b336389707c0881ba92c85a40075090ec5a87c5b620253f5f477040113a"} Jan 03 06:47:15 crc kubenswrapper[4854]: I0103 06:47:15.193202 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zbj7" event={"ID":"9d6896c7-d282-487d-8f0e-d6601e2fd9e2","Type":"ContainerStarted","Data":"924151a0d74288551adeb0b45b4d965d58b48067c558afe06337b99568bd5520"} Jan 03 06:47:16 crc kubenswrapper[4854]: I0103 06:47:16.205193 4854 generic.go:334] "Generic (PLEG): container finished" podID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerID="924151a0d74288551adeb0b45b4d965d58b48067c558afe06337b99568bd5520" exitCode=0 Jan 03 06:47:16 crc kubenswrapper[4854]: I0103 06:47:16.205359 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zbj7" event={"ID":"9d6896c7-d282-487d-8f0e-d6601e2fd9e2","Type":"ContainerDied","Data":"924151a0d74288551adeb0b45b4d965d58b48067c558afe06337b99568bd5520"} Jan 03 06:47:17 crc kubenswrapper[4854]: I0103 06:47:17.224783 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zbj7" event={"ID":"9d6896c7-d282-487d-8f0e-d6601e2fd9e2","Type":"ContainerStarted","Data":"9f44641a76397b153fe336edcdafd05caa0bd29a1c8cc9dc831dfca0b7be2cdc"} Jan 03 06:47:17 crc kubenswrapper[4854]: I0103 06:47:17.266270 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6zbj7" podStartSLOduration=2.671457063 podStartE2EDuration="5.266239927s" podCreationTimestamp="2026-01-03 06:47:12 +0000 UTC" firstStartedPulling="2026-01-03 06:47:14.152452508 +0000 UTC m=+4012.479029080" lastFinishedPulling="2026-01-03 06:47:16.747235322 +0000 UTC m=+4015.073811944" observedRunningTime="2026-01-03 06:47:17.251741878 +0000 UTC m=+4015.578318530" watchObservedRunningTime="2026-01-03 06:47:17.266239927 +0000 UTC m=+4015.592816549" Jan 03 06:47:22 crc kubenswrapper[4854]: I0103 06:47:22.138934 4854 scope.go:117] "RemoveContainer" 
containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:47:22 crc kubenswrapper[4854]: E0103 06:47:22.140307 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:47:22 crc kubenswrapper[4854]: I0103 06:47:22.612553 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:22 crc kubenswrapper[4854]: I0103 06:47:22.613034 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:22 crc kubenswrapper[4854]: I0103 06:47:22.689267 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:23 crc kubenswrapper[4854]: I0103 06:47:23.907363 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:23 crc kubenswrapper[4854]: I0103 06:47:23.975671 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zbj7"] Jan 03 06:47:25 crc kubenswrapper[4854]: I0103 06:47:25.361630 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6zbj7" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerName="registry-server" containerID="cri-o://9f44641a76397b153fe336edcdafd05caa0bd29a1c8cc9dc831dfca0b7be2cdc" gracePeriod=2 Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.378587 4854 generic.go:334] "Generic (PLEG): container finished" podID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerID="9f44641a76397b153fe336edcdafd05caa0bd29a1c8cc9dc831dfca0b7be2cdc" exitCode=0 Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.378694 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zbj7" event={"ID":"9d6896c7-d282-487d-8f0e-d6601e2fd9e2","Type":"ContainerDied","Data":"9f44641a76397b153fe336edcdafd05caa0bd29a1c8cc9dc831dfca0b7be2cdc"} Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.646427 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.724209 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-utilities\") pod \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.724470 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzw24\" (UniqueName: \"kubernetes.io/projected/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-kube-api-access-bzw24\") pod \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.724540 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-catalog-content\") pod \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\" (UID: \"9d6896c7-d282-487d-8f0e-d6601e2fd9e2\") " Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.728908 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-utilities" (OuterVolumeSpecName: "utilities") pod "9d6896c7-d282-487d-8f0e-d6601e2fd9e2" (UID: "9d6896c7-d282-487d-8f0e-d6601e2fd9e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.735315 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-kube-api-access-bzw24" (OuterVolumeSpecName: "kube-api-access-bzw24") pod "9d6896c7-d282-487d-8f0e-d6601e2fd9e2" (UID: "9d6896c7-d282-487d-8f0e-d6601e2fd9e2"). InnerVolumeSpecName "kube-api-access-bzw24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.777499 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d6896c7-d282-487d-8f0e-d6601e2fd9e2" (UID: "9d6896c7-d282-487d-8f0e-d6601e2fd9e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.827133 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzw24\" (UniqueName: \"kubernetes.io/projected/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-kube-api-access-bzw24\") on node \"crc\" DevicePath \"\"" Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.827165 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:47:26 crc kubenswrapper[4854]: I0103 06:47:26.827174 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6896c7-d282-487d-8f0e-d6601e2fd9e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:47:27 crc kubenswrapper[4854]: I0103 06:47:27.417390 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zbj7" event={"ID":"9d6896c7-d282-487d-8f0e-d6601e2fd9e2","Type":"ContainerDied","Data":"63e63b336389707c0881ba92c85a40075090ec5a87c5b620253f5f477040113a"} Jan 03 06:47:27 crc kubenswrapper[4854]: I0103 06:47:27.417706 4854 scope.go:117] "RemoveContainer" containerID="9f44641a76397b153fe336edcdafd05caa0bd29a1c8cc9dc831dfca0b7be2cdc" Jan 03 06:47:27 crc kubenswrapper[4854]: I0103 06:47:27.417905 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zbj7" Jan 03 06:47:27 crc kubenswrapper[4854]: I0103 06:47:27.476250 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zbj7"] Jan 03 06:47:27 crc kubenswrapper[4854]: I0103 06:47:27.489506 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6zbj7"] Jan 03 06:47:27 crc kubenswrapper[4854]: I0103 06:47:27.500105 4854 scope.go:117] "RemoveContainer" containerID="924151a0d74288551adeb0b45b4d965d58b48067c558afe06337b99568bd5520" Jan 03 06:47:27 crc kubenswrapper[4854]: I0103 06:47:27.530377 4854 scope.go:117] "RemoveContainer" containerID="6ff13db7424eb93bfcddcba07ffeff5e623275cb22a16f9ccb0ed993fbc5b2b4" Jan 03 06:47:28 crc kubenswrapper[4854]: I0103 06:47:28.139483 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" path="/var/lib/kubelet/pods/9d6896c7-d282-487d-8f0e-d6601e2fd9e2/volumes" Jan 03 06:47:35 crc kubenswrapper[4854]: I0103 06:47:35.118730 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:47:35 crc kubenswrapper[4854]: E0103 06:47:35.120916 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:47:49 crc kubenswrapper[4854]: I0103 06:47:49.118402 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:47:49 crc kubenswrapper[4854]: E0103 06:47:49.119973 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:48:03 crc kubenswrapper[4854]: I0103 06:48:03.120420 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:48:03 crc kubenswrapper[4854]: E0103 06:48:03.121591 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:48:18 crc kubenswrapper[4854]: I0103 06:48:18.119526 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:48:18 crc kubenswrapper[4854]: E0103 06:48:18.120939 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:48:30 crc kubenswrapper[4854]: I0103 06:48:30.118851 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:48:30 crc kubenswrapper[4854]: E0103 06:48:30.119725 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:48:42 crc kubenswrapper[4854]: I0103 06:48:42.137283 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:48:42 crc kubenswrapper[4854]: E0103 06:48:42.138385 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:48:53 crc kubenswrapper[4854]: I0103 06:48:53.119352 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:48:53 crc kubenswrapper[4854]: E0103 06:48:53.120485 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.097715 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wtcjx"] Jan 03 06:49:00 crc kubenswrapper[4854]: E0103 06:49:00.100380 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerName="extract-utilities" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.100400 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerName="extract-utilities" Jan 03 06:49:00 crc kubenswrapper[4854]: E0103 06:49:00.100444 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerName="extract-content" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.100452 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerName="extract-content" Jan 03 06:49:00 crc kubenswrapper[4854]: E0103 06:49:00.100485 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerName="registry-server" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.100494 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerName="registry-server" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.100774 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6896c7-d282-487d-8f0e-d6601e2fd9e2" containerName="registry-server" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.102893 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.159976 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wtcjx"] Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.240550 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-catalog-content\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.240780 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4sv\" (UniqueName: \"kubernetes.io/projected/1de4507b-fd9e-4301-a290-abecd4cdb494-kube-api-access-fx4sv\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.241037 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-utilities\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.342759 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-utilities\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.342884 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-catalog-content\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.342967 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx4sv\" (UniqueName: \"kubernetes.io/projected/1de4507b-fd9e-4301-a290-abecd4cdb494-kube-api-access-fx4sv\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.343465 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-utilities\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.343657 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-catalog-content\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.373022 4854 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fx4sv\" (UniqueName: \"kubernetes.io/projected/1de4507b-fd9e-4301-a290-abecd4cdb494-kube-api-access-fx4sv\") pod \"certified-operators-wtcjx\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:00 crc kubenswrapper[4854]: I0103 06:49:00.449901 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:01 crc kubenswrapper[4854]: I0103 06:49:01.040567 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wtcjx"] Jan 03 06:49:01 crc kubenswrapper[4854]: I0103 06:49:01.075359 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcjx" event={"ID":"1de4507b-fd9e-4301-a290-abecd4cdb494","Type":"ContainerStarted","Data":"a6939bfabf1204f4b03c6ebcf49934860e1c8dcfdea83f08ba61c94ab7e5ea14"} Jan 03 06:49:02 crc kubenswrapper[4854]: I0103 06:49:02.099808 4854 generic.go:334] "Generic (PLEG): container finished" podID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerID="69ccb61c82b22670b2ada96f0d0a9767d2caec61732d4ed14d1063b1511495d4" exitCode=0 Jan 03 06:49:02 crc kubenswrapper[4854]: I0103 06:49:02.099940 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcjx" event={"ID":"1de4507b-fd9e-4301-a290-abecd4cdb494","Type":"ContainerDied","Data":"69ccb61c82b22670b2ada96f0d0a9767d2caec61732d4ed14d1063b1511495d4"} Jan 03 06:49:03 crc kubenswrapper[4854]: I0103 06:49:03.111016 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcjx" event={"ID":"1de4507b-fd9e-4301-a290-abecd4cdb494","Type":"ContainerStarted","Data":"e93753cb2a5b2351180f9c391e113c6b14fdc5ac6fb92ab7296ff200f10fda39"} Jan 03 06:49:04 crc kubenswrapper[4854]: I0103 06:49:04.118188 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:49:04 crc kubenswrapper[4854]: E0103 06:49:04.118841 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:49:04 crc kubenswrapper[4854]: I0103 06:49:04.122559 4854 generic.go:334] "Generic (PLEG): container finished" podID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerID="e93753cb2a5b2351180f9c391e113c6b14fdc5ac6fb92ab7296ff200f10fda39" exitCode=0 Jan 03 06:49:04 crc kubenswrapper[4854]: I0103 06:49:04.137445 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcjx" event={"ID":"1de4507b-fd9e-4301-a290-abecd4cdb494","Type":"ContainerDied","Data":"e93753cb2a5b2351180f9c391e113c6b14fdc5ac6fb92ab7296ff200f10fda39"} Jan 03 06:49:05 crc kubenswrapper[4854]: I0103 06:49:05.137126 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcjx" event={"ID":"1de4507b-fd9e-4301-a290-abecd4cdb494","Type":"ContainerStarted","Data":"45966d3f330ad3de626abe0957f7da4f078f3d57912cb615bd466d61ec17cece"} Jan 03 06:49:05 crc kubenswrapper[4854]: I0103 06:49:05.165915 4854 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wtcjx" podStartSLOduration=2.732100012 podStartE2EDuration="5.165897917s" podCreationTimestamp="2026-01-03 06:49:00 +0000 UTC" firstStartedPulling="2026-01-03 06:49:02.104917506 +0000 UTC m=+4120.431494118" lastFinishedPulling="2026-01-03 06:49:04.538715411 +0000 UTC m=+4122.865292023" observedRunningTime="2026-01-03 06:49:05.161073257 +0000 UTC m=+4123.487649859" watchObservedRunningTime="2026-01-03 06:49:05.165897917 +0000 UTC m=+4123.492474489" Jan 03 06:49:10 crc kubenswrapper[4854]: I0103 06:49:10.450177 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:10 crc kubenswrapper[4854]: I0103 06:49:10.450649 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:10 crc kubenswrapper[4854]: I0103 06:49:10.522136 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:11 crc kubenswrapper[4854]: I0103 06:49:11.965259 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:12 crc kubenswrapper[4854]: I0103 06:49:12.034923 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wtcjx"] Jan 03 06:49:13 crc kubenswrapper[4854]: I0103 06:49:13.236065 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wtcjx" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerName="registry-server" containerID="cri-o://45966d3f330ad3de626abe0957f7da4f078f3d57912cb615bd466d61ec17cece" gracePeriod=2 Jan 03 06:49:14 crc kubenswrapper[4854]: I0103 06:49:14.261157 4854 generic.go:334] "Generic (PLEG): container finished" podID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerID="45966d3f330ad3de626abe0957f7da4f078f3d57912cb615bd466d61ec17cece" exitCode=0 Jan 03 06:49:14 crc kubenswrapper[4854]: I0103 06:49:14.261237 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcjx" event={"ID":"1de4507b-fd9e-4301-a290-abecd4cdb494","Type":"ContainerDied","Data":"45966d3f330ad3de626abe0957f7da4f078f3d57912cb615bd466d61ec17cece"} Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.037963 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.152570 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-catalog-content\") pod \"1de4507b-fd9e-4301-a290-abecd4cdb494\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.152873 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx4sv\" (UniqueName: \"kubernetes.io/projected/1de4507b-fd9e-4301-a290-abecd4cdb494-kube-api-access-fx4sv\") pod \"1de4507b-fd9e-4301-a290-abecd4cdb494\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.152990 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-utilities\") pod \"1de4507b-fd9e-4301-a290-abecd4cdb494\" (UID: \"1de4507b-fd9e-4301-a290-abecd4cdb494\") " Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.153990 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-utilities" (OuterVolumeSpecName: "utilities") pod "1de4507b-fd9e-4301-a290-abecd4cdb494" (UID: "1de4507b-fd9e-4301-a290-abecd4cdb494"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.159225 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de4507b-fd9e-4301-a290-abecd4cdb494-kube-api-access-fx4sv" (OuterVolumeSpecName: "kube-api-access-fx4sv") pod "1de4507b-fd9e-4301-a290-abecd4cdb494" (UID: "1de4507b-fd9e-4301-a290-abecd4cdb494"). InnerVolumeSpecName "kube-api-access-fx4sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.203035 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1de4507b-fd9e-4301-a290-abecd4cdb494" (UID: "1de4507b-fd9e-4301-a290-abecd4cdb494"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.255426 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.255565 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx4sv\" (UniqueName: \"kubernetes.io/projected/1de4507b-fd9e-4301-a290-abecd4cdb494-kube-api-access-fx4sv\") on node \"crc\" DevicePath \"\"" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.255769 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de4507b-fd9e-4301-a290-abecd4cdb494-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.276722 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtcjx" event={"ID":"1de4507b-fd9e-4301-a290-abecd4cdb494","Type":"ContainerDied","Data":"a6939bfabf1204f4b03c6ebcf49934860e1c8dcfdea83f08ba61c94ab7e5ea14"} Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.276778 4854 scope.go:117] "RemoveContainer" containerID="45966d3f330ad3de626abe0957f7da4f078f3d57912cb615bd466d61ec17cece" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.276935 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wtcjx" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.301532 4854 scope.go:117] "RemoveContainer" containerID="e93753cb2a5b2351180f9c391e113c6b14fdc5ac6fb92ab7296ff200f10fda39" Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.330840 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wtcjx"] Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.344411 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wtcjx"] Jan 03 06:49:15 crc kubenswrapper[4854]: I0103 06:49:15.344801 4854 scope.go:117] "RemoveContainer" containerID="69ccb61c82b22670b2ada96f0d0a9767d2caec61732d4ed14d1063b1511495d4" Jan 03 06:49:16 crc kubenswrapper[4854]: I0103 06:49:16.131310 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" path="/var/lib/kubelet/pods/1de4507b-fd9e-4301-a290-abecd4cdb494/volumes" Jan 03 06:49:19 crc kubenswrapper[4854]: I0103 06:49:19.118696 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:49:19 crc kubenswrapper[4854]: E0103 06:49:19.119954 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:49:33 crc kubenswrapper[4854]: I0103 06:49:33.118562 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:49:33 crc kubenswrapper[4854]: E0103 06:49:33.119547 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:49:47 crc kubenswrapper[4854]: I0103 06:49:47.118363 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:49:47 crc kubenswrapper[4854]: I0103 06:49:47.648374 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"16922eb0b91f8b47c8e69d4bf6267760aa034057d5e1e12b0936395e82226bf3"} Jan 03 06:49:53 crc kubenswrapper[4854]: I0103 06:49:53.139526 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 06:50:53 crc kubenswrapper[4854]: I0103 06:50:53.134004 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 06:51:04 crc kubenswrapper[4854]: I0103 06:51:04.871029 4854 scope.go:117] "RemoveContainer" containerID="d1d0249091d44b673d1f855e59b291c384ae33aa5919903e5682b281609fc4bd" Jan 03 06:51:04 crc kubenswrapper[4854]: I0103 06:51:04.919992 4854 scope.go:117] "RemoveContainer" containerID="67aa3bfb767380b3f3b9246afffe365eb978714334eb852f1cf068e0dbfad29d" Jan 03 06:51:04 crc kubenswrapper[4854]: I0103 06:51:04.970587 4854 scope.go:117] "RemoveContainer" containerID="1f9c46cbf2d16cd5a9e4b94df901aebc3ce6ef9efc0fdd85cac31822ee0ce9ec" Jan 03 06:52:11 crc kubenswrapper[4854]: I0103 06:52:11.755799 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:52:11 crc kubenswrapper[4854]: I0103 06:52:11.756918 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:52:41 crc kubenswrapper[4854]: I0103 06:52:41.755822 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:52:41 crc kubenswrapper[4854]: I0103 06:52:41.756463 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:53:11 crc kubenswrapper[4854]: I0103 06:53:11.756011 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:53:11 crc kubenswrapper[4854]: I0103 06:53:11.756684 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:53:11 crc kubenswrapper[4854]: I0103 06:53:11.756774 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:53:11 crc kubenswrapper[4854]: I0103 06:53:11.758316 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16922eb0b91f8b47c8e69d4bf6267760aa034057d5e1e12b0936395e82226bf3"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:53:11 crc kubenswrapper[4854]: I0103 06:53:11.758460 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://16922eb0b91f8b47c8e69d4bf6267760aa034057d5e1e12b0936395e82226bf3" gracePeriod=600 Jan 03 06:53:12 crc kubenswrapper[4854]: I0103 06:53:12.496697 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="16922eb0b91f8b47c8e69d4bf6267760aa034057d5e1e12b0936395e82226bf3" exitCode=0 Jan 03 06:53:12 crc kubenswrapper[4854]: I0103 06:53:12.496764 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"16922eb0b91f8b47c8e69d4bf6267760aa034057d5e1e12b0936395e82226bf3"} Jan 03 06:53:12 crc kubenswrapper[4854]: I0103 06:53:12.497289 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"} Jan 03 06:53:12 crc kubenswrapper[4854]: I0103 06:53:12.497319 4854 scope.go:117] "RemoveContainer" containerID="0ad481e457e7d7b411292c10ab3c7ba3a1eabe2389a354e3396b5bc21a961e6f" Jan 03 06:54:42 crc kubenswrapper[4854]: I0103 06:54:42.937393 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 06:54:42 crc kubenswrapper[4854]: I0103 06:54:42.938150 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.625836 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f4dx2"] Jan 03 06:55:21 crc kubenswrapper[4854]: E0103 06:55:21.626877 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerName="extract-content" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.626897 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerName="extract-content" Jan 03 06:55:21 crc kubenswrapper[4854]: E0103 06:55:21.626947 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerName="extract-utilities" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.626956 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerName="extract-utilities" Jan 03 06:55:21 crc kubenswrapper[4854]: E0103 06:55:21.626972 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerName="registry-server" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.626980 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerName="registry-server" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.627267 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de4507b-fd9e-4301-a290-abecd4cdb494" containerName="registry-server" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.629293 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.649643 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4dx2"] Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.732612 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gt77\" (UniqueName: \"kubernetes.io/projected/1bddb19e-6e09-460f-887b-911e01826222-kube-api-access-2gt77\") pod \"redhat-marketplace-f4dx2\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.732922 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-utilities\") pod \"redhat-marketplace-f4dx2\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.733021 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-catalog-content\") pod \"redhat-marketplace-f4dx2\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.835543 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gt77\" (UniqueName: \"kubernetes.io/projected/1bddb19e-6e09-460f-887b-911e01826222-kube-api-access-2gt77\") pod \"redhat-marketplace-f4dx2\" (UID: 
\"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.835664 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-utilities\") pod \"redhat-marketplace-f4dx2\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.835717 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-catalog-content\") pod \"redhat-marketplace-f4dx2\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.836300 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-catalog-content\") pod \"redhat-marketplace-f4dx2\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.836491 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-utilities\") pod \"redhat-marketplace-f4dx2\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:21 crc kubenswrapper[4854]: I0103 06:55:21.861057 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gt77\" (UniqueName: \"kubernetes.io/projected/1bddb19e-6e09-460f-887b-911e01826222-kube-api-access-2gt77\") pod \"redhat-marketplace-f4dx2\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:22 crc kubenswrapper[4854]: I0103 06:55:22.021477 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:22 crc kubenswrapper[4854]: I0103 06:55:22.569606 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4dx2"] Jan 03 06:55:23 crc kubenswrapper[4854]: I0103 06:55:23.161810 4854 generic.go:334] "Generic (PLEG): container finished" podID="1bddb19e-6e09-460f-887b-911e01826222" containerID="db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb" exitCode=0 Jan 03 06:55:23 crc kubenswrapper[4854]: I0103 06:55:23.161919 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4dx2" event={"ID":"1bddb19e-6e09-460f-887b-911e01826222","Type":"ContainerDied","Data":"db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb"} Jan 03 06:55:23 crc kubenswrapper[4854]: I0103 06:55:23.162271 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4dx2" event={"ID":"1bddb19e-6e09-460f-887b-911e01826222","Type":"ContainerStarted","Data":"dee296a9856ad531a2f424577cdda54fa744363bfd30dbc9b7f9f02741c92c20"} Jan 03 06:55:23 crc kubenswrapper[4854]: I0103 06:55:23.165856 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 06:55:24 crc kubenswrapper[4854]: I0103 06:55:24.175036 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4dx2" event={"ID":"1bddb19e-6e09-460f-887b-911e01826222","Type":"ContainerStarted","Data":"2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91"} Jan 03 06:55:25 crc kubenswrapper[4854]: I0103 06:55:25.201118 4854 generic.go:334] "Generic (PLEG): container finished" podID="1bddb19e-6e09-460f-887b-911e01826222" containerID="2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91" exitCode=0 Jan 03 06:55:25 crc kubenswrapper[4854]: I0103 06:55:25.201215 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4dx2" event={"ID":"1bddb19e-6e09-460f-887b-911e01826222","Type":"ContainerDied","Data":"2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91"} Jan 03 06:55:26 crc kubenswrapper[4854]: I0103 06:55:26.229618 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4dx2" event={"ID":"1bddb19e-6e09-460f-887b-911e01826222","Type":"ContainerStarted","Data":"252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f"} Jan 03 06:55:26 crc kubenswrapper[4854]: I0103 06:55:26.255673 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f4dx2" podStartSLOduration=2.70415592 podStartE2EDuration="5.255647191s" podCreationTimestamp="2026-01-03 06:55:21 +0000 UTC" firstStartedPulling="2026-01-03 06:55:23.165525388 +0000 UTC m=+4501.492101970" lastFinishedPulling="2026-01-03 06:55:25.717016659 +0000 UTC m=+4504.043593241" observedRunningTime="2026-01-03 06:55:26.247885379 +0000 UTC m=+4504.574461961" watchObservedRunningTime="2026-01-03 06:55:26.255647191 +0000 UTC m=+4504.582223783" Jan 03 06:55:32 crc kubenswrapper[4854]: I0103 06:55:32.023490 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:32 crc kubenswrapper[4854]: I0103 06:55:32.024567 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 
06:55:32 crc kubenswrapper[4854]: I0103 06:55:32.507022 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:32 crc kubenswrapper[4854]: I0103 06:55:32.570868 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:32 crc kubenswrapper[4854]: I0103 06:55:32.754413 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4dx2"] Jan 03 06:55:34 crc kubenswrapper[4854]: I0103 06:55:34.318486 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f4dx2" podUID="1bddb19e-6e09-460f-887b-911e01826222" containerName="registry-server" containerID="cri-o://252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f" gracePeriod=2 Jan 03 06:55:34 crc kubenswrapper[4854]: I0103 06:55:34.988761 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.179591 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-utilities\") pod \"1bddb19e-6e09-460f-887b-911e01826222\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.179649 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-catalog-content\") pod \"1bddb19e-6e09-460f-887b-911e01826222\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.179834 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gt77\" (UniqueName: \"kubernetes.io/projected/1bddb19e-6e09-460f-887b-911e01826222-kube-api-access-2gt77\") pod \"1bddb19e-6e09-460f-887b-911e01826222\" (UID: \"1bddb19e-6e09-460f-887b-911e01826222\") " Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.180742 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-utilities" (OuterVolumeSpecName: "utilities") pod "1bddb19e-6e09-460f-887b-911e01826222" (UID: "1bddb19e-6e09-460f-887b-911e01826222"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.190259 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bddb19e-6e09-460f-887b-911e01826222-kube-api-access-2gt77" (OuterVolumeSpecName: "kube-api-access-2gt77") pod "1bddb19e-6e09-460f-887b-911e01826222" (UID: "1bddb19e-6e09-460f-887b-911e01826222"). InnerVolumeSpecName "kube-api-access-2gt77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.215375 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bddb19e-6e09-460f-887b-911e01826222" (UID: "1bddb19e-6e09-460f-887b-911e01826222"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.283368 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gt77\" (UniqueName: \"kubernetes.io/projected/1bddb19e-6e09-460f-887b-911e01826222-kube-api-access-2gt77\") on node \"crc\" DevicePath \"\"" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.283403 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.283412 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bddb19e-6e09-460f-887b-911e01826222-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.329995 4854 generic.go:334] "Generic (PLEG): container finished" podID="1bddb19e-6e09-460f-887b-911e01826222" containerID="252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f" exitCode=0 Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.330054 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4dx2" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.330066 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4dx2" event={"ID":"1bddb19e-6e09-460f-887b-911e01826222","Type":"ContainerDied","Data":"252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f"} Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.330123 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4dx2" event={"ID":"1bddb19e-6e09-460f-887b-911e01826222","Type":"ContainerDied","Data":"dee296a9856ad531a2f424577cdda54fa744363bfd30dbc9b7f9f02741c92c20"} Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.330143 4854 scope.go:117] "RemoveContainer" containerID="252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.364632 4854 scope.go:117] "RemoveContainer" containerID="2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.373277 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4dx2"] Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.382625 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4dx2"] Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.391380 4854 scope.go:117] "RemoveContainer" containerID="db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.448050 4854 scope.go:117] "RemoveContainer" containerID="252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f" Jan 03 06:55:35 crc kubenswrapper[4854]: E0103 06:55:35.448822 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f\": container with ID starting with 252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f not found: ID does not exist" containerID="252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.448894 4854 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f"} err="failed to get container status \"252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f\": rpc error: code = NotFound desc = could not find container \"252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f\": container with ID starting with 252773a5842d24651df7c8e6fe6b03d7cceb9d6dbf8e38f9618f2d2f5150c39f not found: ID does not exist" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.448927 4854 scope.go:117] "RemoveContainer" containerID="2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91" Jan 03 06:55:35 crc kubenswrapper[4854]: E0103 06:55:35.449367 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91\": container with ID starting with 2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91 not found: ID does not exist" containerID="2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.449410 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91"} err="failed to get container status \"2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91\": rpc error: code = NotFound desc = could not find container \"2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91\": container with ID starting with 2eb04badd8169c4d998789270e8b43c0699d81b2700a76f1a39cce8976612e91 not found: ID does not exist" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.449439 4854 scope.go:117] "RemoveContainer" containerID="db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb" Jan 03 06:55:35 crc kubenswrapper[4854]: E0103 06:55:35.449906 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb\": container with ID starting with db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb not found: ID does not exist" containerID="db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb" Jan 03 06:55:35 crc kubenswrapper[4854]: I0103 06:55:35.449944 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb"} err="failed to get container status \"db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb\": rpc error: code = NotFound desc = could not find container \"db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb\": container with ID starting with db79594f4cb413ebc4f3bfc016a8d6c9f8463393356ac6c8d11ae0fcf70143bb not found: ID does not exist" Jan 03 06:55:36 crc kubenswrapper[4854]: I0103 06:55:36.132262 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bddb19e-6e09-460f-887b-911e01826222" path="/var/lib/kubelet/pods/1bddb19e-6e09-460f-887b-911e01826222/volumes" Jan 03 06:55:41 crc kubenswrapper[4854]: I0103 06:55:41.755290 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:55:41 crc kubenswrapper[4854]: I0103 06:55:41.755903 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:55:48 crc kubenswrapper[4854]: E0103 06:55:48.916305 4854 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.102:40410->38.102.83.102:42659: write tcp 38.102.83.102:40410->38.102.83.102:42659: write: broken pipe Jan 03 06:56:11 crc kubenswrapper[4854]: I0103 06:56:11.756322 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:56:11 crc kubenswrapper[4854]: I0103 06:56:11.757094 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.188657 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d6m4n"] Jan 03 06:56:41 crc kubenswrapper[4854]: E0103 06:56:41.190611 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bddb19e-6e09-460f-887b-911e01826222" containerName="extract-utilities" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.190706 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bddb19e-6e09-460f-887b-911e01826222" containerName="extract-utilities" Jan 03 06:56:41 crc kubenswrapper[4854]: E0103 06:56:41.190787 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bddb19e-6e09-460f-887b-911e01826222" containerName="extract-content" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.190844 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bddb19e-6e09-460f-887b-911e01826222" containerName="extract-content" Jan 03 06:56:41 crc kubenswrapper[4854]: E0103 06:56:41.190903 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bddb19e-6e09-460f-887b-911e01826222" containerName="registry-server" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.190958 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bddb19e-6e09-460f-887b-911e01826222" containerName="registry-server" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.191508 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bddb19e-6e09-460f-887b-911e01826222" containerName="registry-server" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.193345 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.203366 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6m4n"] Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.290621 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6npnt\" (UniqueName: \"kubernetes.io/projected/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-kube-api-access-6npnt\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.290829 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-utilities\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.291158 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-catalog-content\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.394176 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-utilities\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.394960 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-utilities\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.395327 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-catalog-content\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.395608 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6npnt\" (UniqueName: \"kubernetes.io/projected/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-kube-api-access-6npnt\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.395871 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-catalog-content\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.417134 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6npnt\" (UniqueName: \"kubernetes.io/projected/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-kube-api-access-6npnt\") pod \"redhat-operators-d6m4n\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.535857 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.755987 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.756333 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.756378 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.757220 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 06:56:41 crc kubenswrapper[4854]: I0103 06:56:41.757276 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" gracePeriod=600 Jan 03 06:56:41 crc kubenswrapper[4854]: E0103 06:56:41.902692 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:56:42 crc kubenswrapper[4854]: I0103 06:56:42.057657 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6m4n"] Jan 03 06:56:42 crc kubenswrapper[4854]: I0103 06:56:42.279826 4854 generic.go:334] "Generic (PLEG): container finished" podID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerID="330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c" exitCode=0 Jan 03 06:56:42 crc kubenswrapper[4854]: I0103 06:56:42.280109 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6m4n" event={"ID":"411fdc02-a44d-44c0-a2f4-f3d28e47f10d","Type":"ContainerDied","Data":"330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c"} Jan 03 06:56:42 crc kubenswrapper[4854]: I0103 06:56:42.280137 4854 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6m4n" event={"ID":"411fdc02-a44d-44c0-a2f4-f3d28e47f10d","Type":"ContainerStarted","Data":"ecb23d3f51d9d84db9be2196aa3b83376fb68d94bfdc40fe5c3a77ecab0cb6ef"} Jan 03 06:56:42 crc kubenswrapper[4854]: I0103 06:56:42.284187 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" exitCode=0 Jan 03 06:56:42 crc kubenswrapper[4854]: I0103 06:56:42.284231 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"} Jan 03 06:56:42 crc kubenswrapper[4854]: I0103 06:56:42.284272 4854 scope.go:117] "RemoveContainer" containerID="16922eb0b91f8b47c8e69d4bf6267760aa034057d5e1e12b0936395e82226bf3" Jan 03 06:56:42 crc kubenswrapper[4854]: I0103 06:56:42.284617 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 06:56:42 crc kubenswrapper[4854]: E0103 06:56:42.284879 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:56:44 crc kubenswrapper[4854]: I0103 06:56:44.323928 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6m4n" event={"ID":"411fdc02-a44d-44c0-a2f4-f3d28e47f10d","Type":"ContainerStarted","Data":"3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627"} Jan 03 06:56:48 crc kubenswrapper[4854]: I0103 06:56:48.374381 4854 generic.go:334] "Generic (PLEG): container finished" podID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerID="3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627" exitCode=0 Jan 03 06:56:48 crc kubenswrapper[4854]: I0103 06:56:48.374440 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6m4n" event={"ID":"411fdc02-a44d-44c0-a2f4-f3d28e47f10d","Type":"ContainerDied","Data":"3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627"} Jan 03 06:56:50 crc kubenswrapper[4854]: I0103 06:56:50.403885 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6m4n" event={"ID":"411fdc02-a44d-44c0-a2f4-f3d28e47f10d","Type":"ContainerStarted","Data":"7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03"} Jan 03 06:56:50 crc kubenswrapper[4854]: I0103 06:56:50.440769 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d6m4n" podStartSLOduration=2.485052519 podStartE2EDuration="9.440748619s" podCreationTimestamp="2026-01-03 06:56:41 +0000 UTC" firstStartedPulling="2026-01-03 06:56:42.281845491 +0000 UTC m=+4580.608422073" lastFinishedPulling="2026-01-03 06:56:49.237541611 +0000 UTC m=+4587.564118173" observedRunningTime="2026-01-03 06:56:50.434148736 +0000 UTC m=+4588.760725308" watchObservedRunningTime="2026-01-03 06:56:50.440748619 +0000 UTC m=+4588.767325201" Jan 03 06:56:51 crc 
kubenswrapper[4854]: I0103 06:56:51.536624 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:51 crc kubenswrapper[4854]: I0103 06:56:51.536861 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:56:52 crc kubenswrapper[4854]: I0103 06:56:52.603480 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d6m4n" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="registry-server" probeResult="failure" output=< Jan 03 06:56:52 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 06:56:52 crc kubenswrapper[4854]: > Jan 03 06:56:55 crc kubenswrapper[4854]: I0103 06:56:55.118273 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 06:56:55 crc kubenswrapper[4854]: E0103 06:56:55.118913 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:57:02 crc kubenswrapper[4854]: I0103 06:57:02.595421 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d6m4n" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="registry-server" probeResult="failure" output=< Jan 03 06:57:02 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 06:57:02 crc kubenswrapper[4854]: > Jan 03 06:57:08 crc kubenswrapper[4854]: I0103 06:57:08.118766 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 06:57:08 crc kubenswrapper[4854]: E0103 06:57:08.120001 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:57:11 crc kubenswrapper[4854]: I0103 06:57:11.612503 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:57:11 crc kubenswrapper[4854]: I0103 06:57:11.708880 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:57:12 crc kubenswrapper[4854]: I0103 06:57:12.385433 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6m4n"] Jan 03 06:57:12 crc kubenswrapper[4854]: I0103 06:57:12.790361 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d6m4n" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="registry-server" containerID="cri-o://7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03" gracePeriod=2 Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.366069 4854 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.489362 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-catalog-content\") pod \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.489587 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6npnt\" (UniqueName: \"kubernetes.io/projected/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-kube-api-access-6npnt\") pod \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.489648 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-utilities\") pod \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\" (UID: \"411fdc02-a44d-44c0-a2f4-f3d28e47f10d\") " Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.491117 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-utilities" (OuterVolumeSpecName: "utilities") pod "411fdc02-a44d-44c0-a2f4-f3d28e47f10d" (UID: "411fdc02-a44d-44c0-a2f4-f3d28e47f10d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.502449 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-kube-api-access-6npnt" (OuterVolumeSpecName: "kube-api-access-6npnt") pod "411fdc02-a44d-44c0-a2f4-f3d28e47f10d" (UID: "411fdc02-a44d-44c0-a2f4-f3d28e47f10d"). InnerVolumeSpecName "kube-api-access-6npnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.593109 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6npnt\" (UniqueName: \"kubernetes.io/projected/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-kube-api-access-6npnt\") on node \"crc\" DevicePath \"\"" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.593142 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.608526 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "411fdc02-a44d-44c0-a2f4-f3d28e47f10d" (UID: "411fdc02-a44d-44c0-a2f4-f3d28e47f10d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.695791 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411fdc02-a44d-44c0-a2f4-f3d28e47f10d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.803529 4854 generic.go:334] "Generic (PLEG): container finished" podID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerID="7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03" exitCode=0 Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.803605 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6m4n" event={"ID":"411fdc02-a44d-44c0-a2f4-f3d28e47f10d","Type":"ContainerDied","Data":"7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03"} Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.803635 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6m4n" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.803683 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6m4n" event={"ID":"411fdc02-a44d-44c0-a2f4-f3d28e47f10d","Type":"ContainerDied","Data":"ecb23d3f51d9d84db9be2196aa3b83376fb68d94bfdc40fe5c3a77ecab0cb6ef"} Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.803708 4854 scope.go:117] "RemoveContainer" containerID="7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03" Jan 03 06:57:13 crc kubenswrapper[4854]: I0103 06:57:13.839704 4854 scope.go:117] "RemoveContainer" containerID="3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627" Jan 03 06:57:14 crc kubenswrapper[4854]: I0103 06:57:14.115710 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6m4n"] Jan 03 06:57:14 crc kubenswrapper[4854]: I0103 06:57:14.149771 4854 scope.go:117] "RemoveContainer" containerID="330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c" Jan 03 06:57:14 crc kubenswrapper[4854]: I0103 06:57:14.153961 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d6m4n"] Jan 03 06:57:14 crc kubenswrapper[4854]: I0103 06:57:14.212514 4854 scope.go:117] "RemoveContainer" containerID="7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03" Jan 03 06:57:14 crc kubenswrapper[4854]: E0103 06:57:14.215156 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03\": container with ID starting with 7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03 not found: ID does not exist" containerID="7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03" Jan 03 06:57:14 crc kubenswrapper[4854]: I0103 06:57:14.215198 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03"} err="failed to get container status \"7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03\": rpc error: code = NotFound desc = could not find container \"7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03\": container with ID starting with 7ec8c7c2ae2fe41af7d8b64adf1c3aab16af28730d8dafcc9dcc8930f18ccb03 not found: ID does not exist" Jan 03 06:57:14 crc 
kubenswrapper[4854]: I0103 06:57:14.215223 4854 scope.go:117] "RemoveContainer" containerID="3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627" Jan 03 06:57:14 crc kubenswrapper[4854]: E0103 06:57:14.216120 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627\": container with ID starting with 3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627 not found: ID does not exist" containerID="3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627" Jan 03 06:57:14 crc kubenswrapper[4854]: I0103 06:57:14.216140 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627"} err="failed to get container status \"3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627\": rpc error: code = NotFound desc = could not find container \"3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627\": container with ID starting with 3a989eb439160b5e7503395a837786e5656df834acb40a50f7126a59b8c7a627 not found: ID does not exist" Jan 03 06:57:14 crc kubenswrapper[4854]: I0103 06:57:14.216154 4854 scope.go:117] "RemoveContainer" containerID="330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c" Jan 03 06:57:14 crc kubenswrapper[4854]: E0103 06:57:14.216388 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c\": container with ID starting with 330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c not found: ID does not exist" containerID="330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c" Jan 03 06:57:14 crc kubenswrapper[4854]: I0103 06:57:14.216409 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c"} err="failed to get container status \"330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c\": rpc error: code = NotFound desc = could not find container \"330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c\": container with ID starting with 330511ea4b3998bd119887d8e4cdf84168fcb40ad8ac3acd82db2869d6faaa5c not found: ID does not exist" Jan 03 06:57:16 crc kubenswrapper[4854]: I0103 06:57:16.131923 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" path="/var/lib/kubelet/pods/411fdc02-a44d-44c0-a2f4-f3d28e47f10d/volumes" Jan 03 06:57:22 crc kubenswrapper[4854]: I0103 06:57:22.132506 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 06:57:22 crc kubenswrapper[4854]: E0103 06:57:22.133200 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:57:22 crc kubenswrapper[4854]: E0103 06:57:22.282609 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:57:24 crc kubenswrapper[4854]: E0103 06:57:24.093859 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:57:27 crc kubenswrapper[4854]: E0103 06:57:27.168921 4854 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.102:37466->38.102.83.102:42659: write tcp 38.102.83.102:37466->38.102.83.102:42659: write: broken pipe Jan 03 06:57:32 crc kubenswrapper[4854]: E0103 06:57:32.609114 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:57:37 crc kubenswrapper[4854]: I0103 06:57:37.119643 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 06:57:37 crc kubenswrapper[4854]: E0103 06:57:37.121129 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:57:39 crc kubenswrapper[4854]: E0103 06:57:39.358788 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:57:42 crc kubenswrapper[4854]: E0103 06:57:42.653338 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:57:48 crc kubenswrapper[4854]: E0103 06:57:48.262495 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:57:48 crc kubenswrapper[4854]: E0103 06:57:48.262656 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:57:50 crc kubenswrapper[4854]: I0103 06:57:50.119276 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 06:57:50 crc kubenswrapper[4854]: E0103 06:57:50.120500 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:57:52 crc kubenswrapper[4854]: E0103 06:57:52.701195 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 
03 06:57:54 crc kubenswrapper[4854]: E0103 06:57:54.344113 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:58:02 crc kubenswrapper[4854]: E0103 06:58:02.800692 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:58:05 crc kubenswrapper[4854]: I0103 06:58:05.119228 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 06:58:05 crc kubenswrapper[4854]: E0103 06:58:05.120310 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:58:06 crc kubenswrapper[4854]: E0103 06:58:06.495516 4854 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.102:45574->38.102.83.102:42659: write tcp 38.102.83.102:45574->38.102.83.102:42659: write: broken pipe Jan 03 06:58:09 crc kubenswrapper[4854]: E0103 06:58:09.443871 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.446214 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8n9l7"] Jan 03 06:58:12 crc kubenswrapper[4854]: E0103 06:58:12.446970 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="extract-utilities" Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.446982 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="extract-utilities" Jan 03 06:58:12 crc kubenswrapper[4854]: E0103 06:58:12.447019 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="extract-content" Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.447027 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="extract-content" Jan 03 06:58:12 crc kubenswrapper[4854]: E0103 06:58:12.447060 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="registry-server" Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.447067 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="registry-server" Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.447312 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="411fdc02-a44d-44c0-a2f4-f3d28e47f10d" containerName="registry-server" Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.448933 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.464242 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8n9l7"]
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.526880 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qnx6\" (UniqueName: \"kubernetes.io/projected/000f7ca5-2c6b-4b8a-8930-a48d8252f735-kube-api-access-5qnx6\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.527346 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-catalog-content\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.527469 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-utilities\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.629984 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-catalog-content\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.630269 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-utilities\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.630303 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qnx6\" (UniqueName: \"kubernetes.io/projected/000f7ca5-2c6b-4b8a-8930-a48d8252f735-kube-api-access-5qnx6\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.631519 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-catalog-content\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: I0103 06:58:12.631775 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-utilities\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:12 crc kubenswrapper[4854]: E0103 06:58:12.841839 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 03 06:58:13 crc kubenswrapper[4854]: I0103 06:58:13.039933 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qnx6\" (UniqueName: \"kubernetes.io/projected/000f7ca5-2c6b-4b8a-8930-a48d8252f735-kube-api-access-5qnx6\") pod \"community-operators-8n9l7\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") " pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:13 crc kubenswrapper[4854]: I0103 06:58:13.086993 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:13 crc kubenswrapper[4854]: I0103 06:58:13.129343 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out"
Jan 03 06:58:13 crc kubenswrapper[4854]: I0103 06:58:13.680838 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8n9l7"]
Jan 03 06:58:14 crc kubenswrapper[4854]: I0103 06:58:14.573422 4854 generic.go:334] "Generic (PLEG): container finished" podID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerID="7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d" exitCode=0
Jan 03 06:58:14 crc kubenswrapper[4854]: I0103 06:58:14.573525 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n9l7" event={"ID":"000f7ca5-2c6b-4b8a-8930-a48d8252f735","Type":"ContainerDied","Data":"7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d"}
Jan 03 06:58:14 crc kubenswrapper[4854]: I0103 06:58:14.574419 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n9l7" event={"ID":"000f7ca5-2c6b-4b8a-8930-a48d8252f735","Type":"ContainerStarted","Data":"20e7966300464cbe590eea1fb39c79b4ec0ec764d7b53f5ebd48fcd974cbe356"}
Jan 03 06:58:16 crc kubenswrapper[4854]: I0103 06:58:16.597649 4854 generic.go:334] "Generic (PLEG): container finished" podID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerID="2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82" exitCode=0
Jan 03 06:58:16 crc kubenswrapper[4854]: I0103 06:58:16.597718 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n9l7" event={"ID":"000f7ca5-2c6b-4b8a-8930-a48d8252f735","Type":"ContainerDied","Data":"2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82"}
Jan 03 06:58:17 crc kubenswrapper[4854]: I0103 06:58:17.611985 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n9l7" event={"ID":"000f7ca5-2c6b-4b8a-8930-a48d8252f735","Type":"ContainerStarted","Data":"7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989"}
Jan 03 06:58:17 crc kubenswrapper[4854]: I0103 06:58:17.640421 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8n9l7" podStartSLOduration=3.203245988 podStartE2EDuration="5.640399105s" podCreationTimestamp="2026-01-03 06:58:12 +0000 UTC" firstStartedPulling="2026-01-03 06:58:14.576163094 +0000 UTC m=+4672.902739676" lastFinishedPulling="2026-01-03 06:58:17.013316221 +0000 UTC m=+4675.339892793" observedRunningTime="2026-01-03 06:58:17.635134895 +0000 UTC m=+4675.961711467" watchObservedRunningTime="2026-01-03 06:58:17.640399105 +0000 UTC m=+4675.966975677"
Jan 03 06:58:19 crc kubenswrapper[4854]: I0103 06:58:19.117916 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 06:58:19 crc kubenswrapper[4854]: E0103 06:58:19.118526 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:58:23 crc kubenswrapper[4854]: I0103 06:58:23.088593 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:23 crc kubenswrapper[4854]: I0103 06:58:23.089114 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:23 crc kubenswrapper[4854]: I0103 06:58:23.176328 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:23 crc kubenswrapper[4854]: I0103 06:58:23.732601 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:24 crc kubenswrapper[4854]: I0103 06:58:24.623010 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8n9l7"]
Jan 03 06:58:25 crc kubenswrapper[4854]: I0103 06:58:25.698818 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8n9l7" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerName="registry-server" containerID="cri-o://7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989" gracePeriod=2
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.240765 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8n9l7"
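Reading the "Observed pod startup duration" entry above: podStartE2EDuration (5.640399105s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (3.203245988) matches, to within rounding, that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling, about 2.437s) subtracted. The m=+4675.9... suffixes are monotonic-clock offsets since kubelet start and can be ignored for wall-clock arithmetic. A minimal Go sketch of that arithmetic, with the timestamps copied from the entry (the field semantics are our reading of the tracker, not stated in this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Values copied from the pod_startup_latency_tracker entry above.
        created := parse("2026-01-03 06:58:12 +0000 UTC")
        firstPull := parse("2026-01-03 06:58:14.576163094 +0000 UTC")
        lastPull := parse("2026-01-03 06:58:17.013316221 +0000 UTC")
        running := parse("2026-01-03 06:58:17.640399105 +0000 UTC")

        e2e := running.Sub(created)          // 5.640399105s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // ~3.2032s = podStartSLOduration
        fmt.Println(e2e, slo)
    }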
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.286180 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-catalog-content\") pod \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") "
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.286227 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-utilities\") pod \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") "
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.286287 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qnx6\" (UniqueName: \"kubernetes.io/projected/000f7ca5-2c6b-4b8a-8930-a48d8252f735-kube-api-access-5qnx6\") pod \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\" (UID: \"000f7ca5-2c6b-4b8a-8930-a48d8252f735\") "
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.287781 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-utilities" (OuterVolumeSpecName: "utilities") pod "000f7ca5-2c6b-4b8a-8930-a48d8252f735" (UID: "000f7ca5-2c6b-4b8a-8930-a48d8252f735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.293800 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000f7ca5-2c6b-4b8a-8930-a48d8252f735-kube-api-access-5qnx6" (OuterVolumeSpecName: "kube-api-access-5qnx6") pod "000f7ca5-2c6b-4b8a-8930-a48d8252f735" (UID: "000f7ca5-2c6b-4b8a-8930-a48d8252f735"). InnerVolumeSpecName "kube-api-access-5qnx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.389041 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-utilities\") on node \"crc\" DevicePath \"\""
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.389346 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qnx6\" (UniqueName: \"kubernetes.io/projected/000f7ca5-2c6b-4b8a-8930-a48d8252f735-kube-api-access-5qnx6\") on node \"crc\" DevicePath \"\""
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.686428 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "000f7ca5-2c6b-4b8a-8930-a48d8252f735" (UID: "000f7ca5-2c6b-4b8a-8930-a48d8252f735"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.698036 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000f7ca5-2c6b-4b8a-8930-a48d8252f735-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.713974 4854 generic.go:334] "Generic (PLEG): container finished" podID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerID="7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989" exitCode=0
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.714020 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n9l7" event={"ID":"000f7ca5-2c6b-4b8a-8930-a48d8252f735","Type":"ContainerDied","Data":"7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989"}
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.714037 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8n9l7"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.714050 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n9l7" event={"ID":"000f7ca5-2c6b-4b8a-8930-a48d8252f735","Type":"ContainerDied","Data":"20e7966300464cbe590eea1fb39c79b4ec0ec764d7b53f5ebd48fcd974cbe356"}
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.714070 4854 scope.go:117] "RemoveContainer" containerID="7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.772167 4854 scope.go:117] "RemoveContainer" containerID="2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.798965 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8n9l7"]
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.811965 4854 scope.go:117] "RemoveContainer" containerID="7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.814300 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8n9l7"]
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.874523 4854 scope.go:117] "RemoveContainer" containerID="7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989"
Jan 03 06:58:26 crc kubenswrapper[4854]: E0103 06:58:26.881663 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989\": container with ID starting with 7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989 not found: ID does not exist" containerID="7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.881726 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989"} err="failed to get container status \"7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989\": rpc error: code = NotFound desc = could not find container \"7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989\": container with ID starting with 7177e033c35f4232436f03f06534f6549b2442c51c63c30ccc1098d0b3128989 not found: ID does not exist"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.881756 4854 scope.go:117] "RemoveContainer" containerID="2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82"
Jan 03 06:58:26 crc kubenswrapper[4854]: E0103 06:58:26.882061 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82\": container with ID starting with 2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82 not found: ID does not exist" containerID="2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.882202 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82"} err="failed to get container status \"2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82\": rpc error: code = NotFound desc = could not find container \"2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82\": container with ID starting with 2c9a16b9a3a56a9bf62f7769cb272d0bcb25fc727c25bda22c3e8f43317acd82 not found: ID does not exist"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.882334 4854 scope.go:117] "RemoveContainer" containerID="7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d"
Jan 03 06:58:26 crc kubenswrapper[4854]: E0103 06:58:26.882746 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d\": container with ID starting with 7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d not found: ID does not exist" containerID="7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d"
Jan 03 06:58:26 crc kubenswrapper[4854]: I0103 06:58:26.882773 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d"} err="failed to get container status \"7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d\": rpc error: code = NotFound desc = could not find container \"7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d\": container with ID starting with 7011a15bfa86bcbe48aa298d5583fbfd13959bf55885823560dd92afd143d42d not found: ID does not exist"
Jan 03 06:58:28 crc kubenswrapper[4854]: I0103 06:58:28.130241 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" path="/var/lib/kubelet/pods/000f7ca5-2c6b-4b8a-8930-a48d8252f735/volumes"
Jan 03 06:58:30 crc kubenswrapper[4854]: I0103 06:58:30.119182 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 06:58:30 crc kubenswrapper[4854]: E0103 06:58:30.120809 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:58:32 crc kubenswrapper[4854]: E0103 06:58:32.323287 4854 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.102:57210->38.102.83.102:42659: write tcp 38.102.83.102:57210->38.102.83.102:42659: write: broken pipe
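The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above are a benign race, not a real failure: the containers had already been removed from CRI-O, so the follow-up status lookup comes back with gRPC NotFound and the kubelet just logs it and moves on. A sketch of the usual tolerant-delete pattern (the remove callback is hypothetical; only the grpc status check is real API):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent treats "already gone" as success, which is how
    // the kubelet survives the race visible in the entries above.
    func removeIfPresent(remove func(id string) error, id string) error {
        if err := remove(id); status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    func main() {
        alreadyGone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeIfPresent(alreadyGone, "7177e033c35f")) // <nil>
    }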
Jan 03 06:58:44 crc kubenswrapper[4854]: I0103 06:58:44.119164 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 06:58:44 crc kubenswrapper[4854]: E0103 06:58:44.120820 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:58:57 crc kubenswrapper[4854]: I0103 06:58:57.119178 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 06:58:57 crc kubenswrapper[4854]: E0103 06:58:57.120032 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:59:11 crc kubenswrapper[4854]: I0103 06:59:11.119829 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 06:59:11 crc kubenswrapper[4854]: E0103 06:59:11.120659 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:59:16 crc kubenswrapper[4854]: E0103 06:59:16.772381 4854 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.102:59632->38.102.83.102:42659: write tcp 38.102.83.102:59632->38.102.83.102:42659: write: broken pipe
Jan 03 06:59:24 crc kubenswrapper[4854]: I0103 06:59:24.118312 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 06:59:24 crc kubenswrapper[4854]: E0103 06:59:24.119468 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:59:38 crc kubenswrapper[4854]: I0103 06:59:38.119061 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 06:59:38 crc kubenswrapper[4854]: E0103 06:59:38.120277 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.462709 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kz2rz"]
Jan 03 06:59:42 crc kubenswrapper[4854]: E0103 06:59:42.463855 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerName="extract-utilities"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.463873 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerName="extract-utilities"
Jan 03 06:59:42 crc kubenswrapper[4854]: E0103 06:59:42.463900 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerName="registry-server"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.463907 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerName="registry-server"
Jan 03 06:59:42 crc kubenswrapper[4854]: E0103 06:59:42.463939 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerName="extract-content"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.463947 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerName="extract-content"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.464772 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="000f7ca5-2c6b-4b8a-8930-a48d8252f735" containerName="registry-server"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.466719 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz2rz"
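From here the machine-config-daemon pair of lines repeats every 11-14 seconds: each pass of the sync loop re-evaluates the pod, finds the restart backoff still in force, and emits the same CrashLoopBackOff error; "back-off 5m0s" is the backoff ceiling, not the retry interval. Assuming the kubelet's usual defaults (10s initial restart delay, doubling per crash, capped at 5m; the defaults themselves are not visible in this log), the series looks like:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed defaults: 10s initial container restart backoff,
        // doubling per crash, capped at 5m ("back-off 5m0s" above).
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }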
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.490333 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-catalog-content\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.490473 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-utilities\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.490511 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb22f\" (UniqueName: \"kubernetes.io/projected/c7045758-55e8-4ad8-87ab-733ef44ee647-kube-api-access-vb22f\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.495239 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz2rz"]
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.592551 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb22f\" (UniqueName: \"kubernetes.io/projected/c7045758-55e8-4ad8-87ab-733ef44ee647-kube-api-access-vb22f\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.592764 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-catalog-content\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.592884 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-utilities\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.593428 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-catalog-content\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.593492 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-utilities\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.610543 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb22f\" (UniqueName: \"kubernetes.io/projected/c7045758-55e8-4ad8-87ab-733ef44ee647-kube-api-access-vb22f\") pod \"certified-operators-kz2rz\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") " pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:42 crc kubenswrapper[4854]: I0103 06:59:42.804939 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 06:59:43 crc kubenswrapper[4854]: I0103 06:59:43.346215 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz2rz"]
Jan 03 06:59:43 crc kubenswrapper[4854]: I0103 06:59:43.766674 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz2rz" event={"ID":"c7045758-55e8-4ad8-87ab-733ef44ee647","Type":"ContainerStarted","Data":"1146749e08167f0f941d12563974ed29b982b1d07ed55fe100290771ccb9cce7"}
Jan 03 06:59:44 crc kubenswrapper[4854]: I0103 06:59:44.788415 4854 generic.go:334] "Generic (PLEG): container finished" podID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerID="e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749" exitCode=0
Jan 03 06:59:44 crc kubenswrapper[4854]: I0103 06:59:44.788916 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz2rz" event={"ID":"c7045758-55e8-4ad8-87ab-733ef44ee647","Type":"ContainerDied","Data":"e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749"}
Jan 03 06:59:46 crc kubenswrapper[4854]: I0103 06:59:46.815410 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz2rz" event={"ID":"c7045758-55e8-4ad8-87ab-733ef44ee647","Type":"ContainerStarted","Data":"7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b"}
Jan 03 06:59:47 crc kubenswrapper[4854]: I0103 06:59:47.832861 4854 generic.go:334] "Generic (PLEG): container finished" podID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerID="7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b" exitCode=0
Jan 03 06:59:47 crc kubenswrapper[4854]: I0103 06:59:47.832930 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz2rz" event={"ID":"c7045758-55e8-4ad8-87ab-733ef44ee647","Type":"ContainerDied","Data":"7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b"}
Jan 03 06:59:48 crc kubenswrapper[4854]: I0103 06:59:48.844538 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz2rz" event={"ID":"c7045758-55e8-4ad8-87ab-733ef44ee647","Type":"ContainerStarted","Data":"922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006"}
Jan 03 06:59:48 crc kubenswrapper[4854]: I0103 06:59:48.863502 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kz2rz" podStartSLOduration=3.339958345 podStartE2EDuration="6.863486345s" podCreationTimestamp="2026-01-03 06:59:42 +0000 UTC" firstStartedPulling="2026-01-03 06:59:44.800381194 +0000 UTC m=+4763.126957776" lastFinishedPulling="2026-01-03 06:59:48.323909204 +0000 UTC m=+4766.650485776" observedRunningTime="2026-01-03 06:59:48.861515396 +0000 UTC m=+4767.188091968" watchObservedRunningTime="2026-01-03 06:59:48.863486345 +0000 UTC m=+4767.190062917"
Jan 03 06:59:50 crc kubenswrapper[4854]: I0103 06:59:50.118555 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 06:59:50 crc kubenswrapper[4854]: E0103 06:59:50.119391 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 06:59:52 crc kubenswrapper[4854]: I0103 06:59:52.805549 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kz2rz" Jan 03 06:59:52 crc kubenswrapper[4854]: I0103 06:59:52.806186 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kz2rz" Jan 03 06:59:52 crc kubenswrapper[4854]: I0103 06:59:52.858928 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kz2rz" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.153642 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl"] Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.156145 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.169797 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl"] Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.184829 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.185128 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.298760 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wklt\" (UniqueName: \"kubernetes.io/projected/634c1134-1961-4dc5-aeb8-41eb523cc428-kube-api-access-6wklt\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.298905 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634c1134-1961-4dc5-aeb8-41eb523cc428-secret-volume\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.299062 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634c1134-1961-4dc5-aeb8-41eb523cc428-config-volume\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.471716 4854 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wklt\" (UniqueName: \"kubernetes.io/projected/634c1134-1961-4dc5-aeb8-41eb523cc428-kube-api-access-6wklt\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.471826 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634c1134-1961-4dc5-aeb8-41eb523cc428-secret-volume\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.471941 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634c1134-1961-4dc5-aeb8-41eb523cc428-config-volume\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.480008 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634c1134-1961-4dc5-aeb8-41eb523cc428-config-volume\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.487773 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634c1134-1961-4dc5-aeb8-41eb523cc428-secret-volume\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.550959 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wklt\" (UniqueName: \"kubernetes.io/projected/634c1134-1961-4dc5-aeb8-41eb523cc428-kube-api-access-6wklt\") pod \"collect-profiles-29457060-s9dzl\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" Jan 03 07:00:00 crc kubenswrapper[4854]: I0103 07:00:00.779677 4854 util.go:30] "No sandbox for pod can be found. 
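collect-profiles-29457060-s9dzl lands at exactly 07:00:00 because the CronJob controller names each spawned Job <cronjob>-<scheduled time in minutes since the Unix epoch>. A quick check of the suffix:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Suffix of collect-profiles-29457060-s9dzl, read as minutes
        // since the Unix epoch.
        const scheduledMinutes = 29457060
        fmt.Println(time.Unix(scheduledMinutes*60, 0).UTC()) // 2026-01-03 07:00:00 +0000 UTC
    }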
Jan 03 07:00:01 crc kubenswrapper[4854]: I0103 07:00:01.276574 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl"]
Jan 03 07:00:02 crc kubenswrapper[4854]: I0103 07:00:02.014281 4854 generic.go:334] "Generic (PLEG): container finished" podID="634c1134-1961-4dc5-aeb8-41eb523cc428" containerID="4cea13347db94510d539adc9cdf9c3796070f98c50e4c98ce4f789c5183ebd44" exitCode=0
Jan 03 07:00:02 crc kubenswrapper[4854]: I0103 07:00:02.014413 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" event={"ID":"634c1134-1961-4dc5-aeb8-41eb523cc428","Type":"ContainerDied","Data":"4cea13347db94510d539adc9cdf9c3796070f98c50e4c98ce4f789c5183ebd44"}
Jan 03 07:00:02 crc kubenswrapper[4854]: I0103 07:00:02.014951 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" event={"ID":"634c1134-1961-4dc5-aeb8-41eb523cc428","Type":"ContainerStarted","Data":"4b7487fc70f30e3071c3c02020eb609da7b7f8f562e5077f9ae79f41b6c474cd"}
Jan 03 07:00:02 crc kubenswrapper[4854]: I0103 07:00:02.857710 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.119560 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 07:00:03 crc kubenswrapper[4854]: E0103 07:00:03.120045 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.508194 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl"
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.647515 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634c1134-1961-4dc5-aeb8-41eb523cc428-secret-volume\") pod \"634c1134-1961-4dc5-aeb8-41eb523cc428\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") "
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.648223 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wklt\" (UniqueName: \"kubernetes.io/projected/634c1134-1961-4dc5-aeb8-41eb523cc428-kube-api-access-6wklt\") pod \"634c1134-1961-4dc5-aeb8-41eb523cc428\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") "
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.648449 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634c1134-1961-4dc5-aeb8-41eb523cc428-config-volume\") pod \"634c1134-1961-4dc5-aeb8-41eb523cc428\" (UID: \"634c1134-1961-4dc5-aeb8-41eb523cc428\") "
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.649350 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634c1134-1961-4dc5-aeb8-41eb523cc428-config-volume" (OuterVolumeSpecName: "config-volume") pod "634c1134-1961-4dc5-aeb8-41eb523cc428" (UID: "634c1134-1961-4dc5-aeb8-41eb523cc428"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.649918 4854 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/634c1134-1961-4dc5-aeb8-41eb523cc428-config-volume\") on node \"crc\" DevicePath \"\""
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.653498 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634c1134-1961-4dc5-aeb8-41eb523cc428-kube-api-access-6wklt" (OuterVolumeSpecName: "kube-api-access-6wklt") pod "634c1134-1961-4dc5-aeb8-41eb523cc428" (UID: "634c1134-1961-4dc5-aeb8-41eb523cc428"). InnerVolumeSpecName "kube-api-access-6wklt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.655058 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634c1134-1961-4dc5-aeb8-41eb523cc428-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "634c1134-1961-4dc5-aeb8-41eb523cc428" (UID: "634c1134-1961-4dc5-aeb8-41eb523cc428"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.751702 4854 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/634c1134-1961-4dc5-aeb8-41eb523cc428-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 03 07:00:03 crc kubenswrapper[4854]: I0103 07:00:03.751744 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wklt\" (UniqueName: \"kubernetes.io/projected/634c1134-1961-4dc5-aeb8-41eb523cc428-kube-api-access-6wklt\") on node \"crc\" DevicePath \"\""
Jan 03 07:00:04 crc kubenswrapper[4854]: I0103 07:00:04.043946 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl" event={"ID":"634c1134-1961-4dc5-aeb8-41eb523cc428","Type":"ContainerDied","Data":"4b7487fc70f30e3071c3c02020eb609da7b7f8f562e5077f9ae79f41b6c474cd"}
Jan 03 07:00:04 crc kubenswrapper[4854]: I0103 07:00:04.043988 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7487fc70f30e3071c3c02020eb609da7b7f8f562e5077f9ae79f41b6c474cd"
Jan 03 07:00:04 crc kubenswrapper[4854]: I0103 07:00:04.044002 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29457060-s9dzl"
Jan 03 07:00:04 crc kubenswrapper[4854]: I0103 07:00:04.606522 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"]
Jan 03 07:00:04 crc kubenswrapper[4854]: I0103 07:00:04.617345 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29457015-7c4zz"]
Jan 03 07:00:06 crc kubenswrapper[4854]: I0103 07:00:06.133795 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d049d5-9c6d-4970-b922-adfc41096230" path="/var/lib/kubelet/pods/e7d049d5-9c6d-4970-b922-adfc41096230/volumes"
Jan 03 07:00:06 crc kubenswrapper[4854]: I0103 07:00:06.315296 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz2rz"]
Jan 03 07:00:06 crc kubenswrapper[4854]: I0103 07:00:06.316304 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kz2rz" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerName="registry-server" containerID="cri-o://922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006" gracePeriod=2
Jan 03 07:00:06 crc kubenswrapper[4854]: I0103 07:00:06.896517 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.035379 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-catalog-content\") pod \"c7045758-55e8-4ad8-87ab-733ef44ee647\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") "
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.035649 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-utilities\") pod \"c7045758-55e8-4ad8-87ab-733ef44ee647\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") "
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.035748 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb22f\" (UniqueName: \"kubernetes.io/projected/c7045758-55e8-4ad8-87ab-733ef44ee647-kube-api-access-vb22f\") pod \"c7045758-55e8-4ad8-87ab-733ef44ee647\" (UID: \"c7045758-55e8-4ad8-87ab-733ef44ee647\") "
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.036382 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-utilities" (OuterVolumeSpecName: "utilities") pod "c7045758-55e8-4ad8-87ab-733ef44ee647" (UID: "c7045758-55e8-4ad8-87ab-733ef44ee647"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.036586 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-utilities\") on node \"crc\" DevicePath \"\""
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.043258 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7045758-55e8-4ad8-87ab-733ef44ee647-kube-api-access-vb22f" (OuterVolumeSpecName: "kube-api-access-vb22f") pod "c7045758-55e8-4ad8-87ab-733ef44ee647" (UID: "c7045758-55e8-4ad8-87ab-733ef44ee647"). InnerVolumeSpecName "kube-api-access-vb22f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.083108 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz2rz" event={"ID":"c7045758-55e8-4ad8-87ab-733ef44ee647","Type":"ContainerDied","Data":"922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006"}
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.083171 4854 scope.go:117] "RemoveContainer" containerID="922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.083189 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz2rz"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.083315 4854 generic.go:334] "Generic (PLEG): container finished" podID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerID="922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006" exitCode=0
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.083344 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz2rz" event={"ID":"c7045758-55e8-4ad8-87ab-733ef44ee647","Type":"ContainerDied","Data":"1146749e08167f0f941d12563974ed29b982b1d07ed55fe100290771ccb9cce7"}
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.094294 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7045758-55e8-4ad8-87ab-733ef44ee647" (UID: "c7045758-55e8-4ad8-87ab-733ef44ee647"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.143804 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb22f\" (UniqueName: \"kubernetes.io/projected/c7045758-55e8-4ad8-87ab-733ef44ee647-kube-api-access-vb22f\") on node \"crc\" DevicePath \"\""
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.144899 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7045758-55e8-4ad8-87ab-733ef44ee647-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.149255 4854 scope.go:117] "RemoveContainer" containerID="7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.174509 4854 scope.go:117] "RemoveContainer" containerID="e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.224837 4854 scope.go:117] "RemoveContainer" containerID="922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006"
Jan 03 07:00:07 crc kubenswrapper[4854]: E0103 07:00:07.225318 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006\": container with ID starting with 922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006 not found: ID does not exist" containerID="922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.225347 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006"} err="failed to get container status \"922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006\": rpc error: code = NotFound desc = could not find container \"922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006\": container with ID starting with 922e43dfd32fa05ba6706f9c67db53be5eccd10cc29b3d8ebe3899c1ebc08006 not found: ID does not exist"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.225371 4854 scope.go:117] "RemoveContainer" containerID="7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b"
Jan 03 07:00:07 crc kubenswrapper[4854]: E0103 07:00:07.225583 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b\": container with ID starting with 7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b not found: ID does not exist" containerID="7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.225609 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b"} err="failed to get container status \"7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b\": rpc error: code = NotFound desc = could not find container \"7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b\": container with ID starting with 7a73991191a23e7acd398beb263afe59193e2a0aa7595e7da05e9ece631ba22b not found: ID does not exist"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.225623 4854 scope.go:117] "RemoveContainer" containerID="e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749"
Jan 03 07:00:07 crc kubenswrapper[4854]: E0103 07:00:07.225836 4854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749\": container with ID starting with e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749 not found: ID does not exist" containerID="e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.225861 4854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749"} err="failed to get container status \"e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749\": rpc error: code = NotFound desc = could not find container \"e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749\": container with ID starting with e7b3358b2d20844744cd6185af0bec145a7287a63a61b29fe7057c4c5c83e749 not found: ID does not exist"
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.441985 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz2rz"]
Jan 03 07:00:07 crc kubenswrapper[4854]: I0103 07:00:07.457922 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kz2rz"]
Jan 03 07:00:08 crc kubenswrapper[4854]: I0103 07:00:08.133759 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" path="/var/lib/kubelet/pods/c7045758-55e8-4ad8-87ab-733ef44ee647/volumes"
Jan 03 07:00:16 crc kubenswrapper[4854]: I0103 07:00:16.119184 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
Jan 03 07:00:16 crc kubenswrapper[4854]: E0103 07:00:16.120042 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:00:30 crc kubenswrapper[4854]: I0103 07:00:30.119613 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527"
containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 07:00:30 crc kubenswrapper[4854]: E0103 07:00:30.120839 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 07:00:44 crc kubenswrapper[4854]: I0103 07:00:44.118529 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 07:00:44 crc kubenswrapper[4854]: E0103 07:00:44.119528 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 07:00:58 crc kubenswrapper[4854]: I0103 07:00:58.118822 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 07:00:58 crc kubenswrapper[4854]: E0103 07:00:58.119795 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.193284 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29457061-q6wxj"] Jan 03 07:01:00 crc kubenswrapper[4854]: E0103 07:01:00.194659 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerName="registry-server" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.194687 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerName="registry-server" Jan 03 07:01:00 crc kubenswrapper[4854]: E0103 07:01:00.194738 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerName="extract-content" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.194753 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerName="extract-content" Jan 03 07:01:00 crc kubenswrapper[4854]: E0103 07:01:00.194781 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634c1134-1961-4dc5-aeb8-41eb523cc428" containerName="collect-profiles" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.194793 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="634c1134-1961-4dc5-aeb8-41eb523cc428" containerName="collect-profiles" Jan 03 07:01:00 crc kubenswrapper[4854]: E0103 07:01:00.194818 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerName="extract-utilities" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.194832 4854 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerName="extract-utilities" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.195259 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="634c1134-1961-4dc5-aeb8-41eb523cc428" containerName="collect-profiles" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.195290 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7045758-55e8-4ad8-87ab-733ef44ee647" containerName="registry-server" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.196743 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.209139 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29457061-q6wxj"] Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.325538 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npnps\" (UniqueName: \"kubernetes.io/projected/a0029f1a-f205-451a-96cc-9d0915c2d76c-kube-api-access-npnps\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.325898 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-fernet-keys\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.326114 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-combined-ca-bundle\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.326590 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-config-data\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.428876 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-config-data\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.428977 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npnps\" (UniqueName: \"kubernetes.io/projected/a0029f1a-f205-451a-96cc-9d0915c2d76c-kube-api-access-npnps\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.429013 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-fernet-keys\") pod 
\"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.429053 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-combined-ca-bundle\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.435320 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-config-data\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.435707 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-fernet-keys\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.440654 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-combined-ca-bundle\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.452243 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npnps\" (UniqueName: \"kubernetes.io/projected/a0029f1a-f205-451a-96cc-9d0915c2d76c-kube-api-access-npnps\") pod \"keystone-cron-29457061-q6wxj\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:00 crc kubenswrapper[4854]: I0103 07:01:00.529043 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:01 crc kubenswrapper[4854]: I0103 07:01:01.039671 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29457061-q6wxj"] Jan 03 07:01:01 crc kubenswrapper[4854]: I0103 07:01:01.828131 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29457061-q6wxj" event={"ID":"a0029f1a-f205-451a-96cc-9d0915c2d76c","Type":"ContainerStarted","Data":"c58640774a5a2c1ca764d33a1fae63a4cafb0a6915e52fb01672394df27e5b48"} Jan 03 07:01:01 crc kubenswrapper[4854]: I0103 07:01:01.829619 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29457061-q6wxj" event={"ID":"a0029f1a-f205-451a-96cc-9d0915c2d76c","Type":"ContainerStarted","Data":"ae95e4744db69b3f751ba5e2013a594ebdd20001c1b1e011676e72b292847897"} Jan 03 07:01:01 crc kubenswrapper[4854]: I0103 07:01:01.866575 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29457061-q6wxj" podStartSLOduration=1.866540567 podStartE2EDuration="1.866540567s" podCreationTimestamp="2026-01-03 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 07:01:01.853352291 +0000 UTC m=+4840.179928873" watchObservedRunningTime="2026-01-03 07:01:01.866540567 +0000 UTC m=+4840.193117189" Jan 03 07:01:04 crc kubenswrapper[4854]: I0103 07:01:04.865367 4854 generic.go:334] "Generic (PLEG): container finished" podID="a0029f1a-f205-451a-96cc-9d0915c2d76c" containerID="c58640774a5a2c1ca764d33a1fae63a4cafb0a6915e52fb01672394df27e5b48" exitCode=0 Jan 03 07:01:04 crc kubenswrapper[4854]: I0103 07:01:04.865499 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29457061-q6wxj" event={"ID":"a0029f1a-f205-451a-96cc-9d0915c2d76c","Type":"ContainerDied","Data":"c58640774a5a2c1ca764d33a1fae63a4cafb0a6915e52fb01672394df27e5b48"} Jan 03 07:01:05 crc kubenswrapper[4854]: I0103 07:01:05.436431 4854 scope.go:117] "RemoveContainer" containerID="75162c99648dbc36cd470c0935bb302596449ec1ff34ac2718c86301c81169fa" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.407487 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.501984 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-combined-ca-bundle\") pod \"a0029f1a-f205-451a-96cc-9d0915c2d76c\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.502124 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npnps\" (UniqueName: \"kubernetes.io/projected/a0029f1a-f205-451a-96cc-9d0915c2d76c-kube-api-access-npnps\") pod \"a0029f1a-f205-451a-96cc-9d0915c2d76c\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.502274 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-config-data\") pod \"a0029f1a-f205-451a-96cc-9d0915c2d76c\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.502499 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-fernet-keys\") pod \"a0029f1a-f205-451a-96cc-9d0915c2d76c\" (UID: \"a0029f1a-f205-451a-96cc-9d0915c2d76c\") " Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.508185 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0029f1a-f205-451a-96cc-9d0915c2d76c-kube-api-access-npnps" (OuterVolumeSpecName: "kube-api-access-npnps") pod "a0029f1a-f205-451a-96cc-9d0915c2d76c" (UID: "a0029f1a-f205-451a-96cc-9d0915c2d76c"). InnerVolumeSpecName "kube-api-access-npnps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.509864 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a0029f1a-f205-451a-96cc-9d0915c2d76c" (UID: "a0029f1a-f205-451a-96cc-9d0915c2d76c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.548150 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0029f1a-f205-451a-96cc-9d0915c2d76c" (UID: "a0029f1a-f205-451a-96cc-9d0915c2d76c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.584329 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-config-data" (OuterVolumeSpecName: "config-data") pod "a0029f1a-f205-451a-96cc-9d0915c2d76c" (UID: "a0029f1a-f205-451a-96cc-9d0915c2d76c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.606006 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.606036 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npnps\" (UniqueName: \"kubernetes.io/projected/a0029f1a-f205-451a-96cc-9d0915c2d76c-kube-api-access-npnps\") on node \"crc\" DevicePath \"\"" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.606046 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.606055 4854 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0029f1a-f205-451a-96cc-9d0915c2d76c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.906722 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29457061-q6wxj" event={"ID":"a0029f1a-f205-451a-96cc-9d0915c2d76c","Type":"ContainerDied","Data":"ae95e4744db69b3f751ba5e2013a594ebdd20001c1b1e011676e72b292847897"} Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.907014 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae95e4744db69b3f751ba5e2013a594ebdd20001c1b1e011676e72b292847897" Jan 03 07:01:06 crc kubenswrapper[4854]: I0103 07:01:06.906875 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29457061-q6wxj" Jan 03 07:01:11 crc kubenswrapper[4854]: I0103 07:01:11.118772 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 07:01:11 crc kubenswrapper[4854]: E0103 07:01:11.122892 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 07:01:22 crc kubenswrapper[4854]: I0103 07:01:22.128523 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 07:01:22 crc kubenswrapper[4854]: E0103 07:01:22.140359 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 07:01:36 crc kubenswrapper[4854]: I0103 07:01:36.119560 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 07:01:36 crc kubenswrapper[4854]: E0103 07:01:36.122327 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" Jan 03 07:01:47 crc kubenswrapper[4854]: I0103 07:01:47.118395 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 07:01:47 crc kubenswrapper[4854]: I0103 07:01:47.492330 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"8eb267d56b3312877d76a39e133c47d96122041fd2c9482d4408e71d0d1ac7f0"} Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.978452 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 03 07:03:00 crc kubenswrapper[4854]: E0103 07:03:00.979832 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0029f1a-f205-451a-96cc-9d0915c2d76c" containerName="keystone-cron" Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.979850 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0029f1a-f205-451a-96cc-9d0915c2d76c" containerName="keystone-cron" Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.980245 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0029f1a-f205-451a-96cc-9d0915c2d76c" containerName="keystone-cron" Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.981274 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.981376 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.995240 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.995365 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.995254 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5fscr" Jan 03 07:03:00 crc kubenswrapper[4854]: I0103 07:03:00.995582 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.144333 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.144647 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzrpx\" (UniqueName: \"kubernetes.io/projected/e5328fb8-38ea-4119-aa67-b052d0ae7971-kube-api-access-jzrpx\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.144793 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.144948 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.145073 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.145187 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-config-data\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.145323 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" 
Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.145445 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.145788 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.247634 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.247688 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.247803 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.248662 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.248747 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzrpx\" (UniqueName: \"kubernetes.io/projected/e5328fb8-38ea-4119-aa67-b052d0ae7971-kube-api-access-jzrpx\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.248850 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.248989 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 
07:03:01.249030 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.249112 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-config-data\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.249283 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.249993 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.250289 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.251154 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-config-data\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.252216 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.331736 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.332179 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.336593 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.343174 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzrpx\" (UniqueName: \"kubernetes.io/projected/e5328fb8-38ea-4119-aa67-b052d0ae7971-kube-api-access-jzrpx\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.475369 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " pod="openstack/tempest-tests-tempest" Jan 03 07:03:01 crc kubenswrapper[4854]: I0103 07:03:01.611210 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 03 07:03:02 crc kubenswrapper[4854]: I0103 07:03:02.369035 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 03 07:03:02 crc kubenswrapper[4854]: W0103 07:03:02.372188 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5328fb8_38ea_4119_aa67_b052d0ae7971.slice/crio-2ca63f2a01b6b67bdb1894d1e8e55816d944c4efb85af7cf8fa72a8e7d455ac0 WatchSource:0}: Error finding container 2ca63f2a01b6b67bdb1894d1e8e55816d944c4efb85af7cf8fa72a8e7d455ac0: Status 404 returned error can't find the container with id 2ca63f2a01b6b67bdb1894d1e8e55816d944c4efb85af7cf8fa72a8e7d455ac0 Jan 03 07:03:02 crc kubenswrapper[4854]: I0103 07:03:02.375886 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 07:03:02 crc kubenswrapper[4854]: I0103 07:03:02.526646 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5328fb8-38ea-4119-aa67-b052d0ae7971","Type":"ContainerStarted","Data":"2ca63f2a01b6b67bdb1894d1e8e55816d944c4efb85af7cf8fa72a8e7d455ac0"} Jan 03 07:03:43 crc kubenswrapper[4854]: E0103 07:03:43.304454 4854 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 03 07:03:43 crc kubenswrapper[4854]: E0103 07:03:43.306860 4854 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzrpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(e5328fb8-38ea-4119-aa67-b052d0ae7971): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 07:03:43 crc kubenswrapper[4854]: E0103 07:03:43.308185 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="e5328fb8-38ea-4119-aa67-b052d0ae7971" Jan 03 07:03:44 crc kubenswrapper[4854]: E0103 07:03:44.190930 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="e5328fb8-38ea-4119-aa67-b052d0ae7971" Jan 03 07:03:57 crc kubenswrapper[4854]: I0103 07:03:57.581522 4854 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 03 07:03:59 crc kubenswrapper[4854]: I0103 07:03:59.359557 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5328fb8-38ea-4119-aa67-b052d0ae7971","Type":"ContainerStarted","Data":"ea8c0cd040983fe5129817596250c6a78376cebc85df8129b621c3c77345d4e5"} Jan 03 07:03:59 crc kubenswrapper[4854]: I0103 07:03:59.394646 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.194102259 podStartE2EDuration="1m0.394628017s" podCreationTimestamp="2026-01-03 07:02:59 +0000 UTC" firstStartedPulling="2026-01-03 07:03:02.375500676 +0000 UTC m=+4960.702077258" lastFinishedPulling="2026-01-03 07:03:57.576026424 +0000 UTC m=+5015.902603016" observedRunningTime="2026-01-03 07:03:59.39151789 +0000 UTC m=+5017.718094482" watchObservedRunningTime="2026-01-03 07:03:59.394628017 +0000 UTC m=+5017.721204599" Jan 03 07:04:11 crc kubenswrapper[4854]: I0103 07:04:11.755639 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 07:04:11 crc kubenswrapper[4854]: I0103 07:04:11.756523 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 07:04:41 crc kubenswrapper[4854]: I0103 07:04:41.755800 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 07:04:41 crc kubenswrapper[4854]: I0103 07:04:41.756458 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 07:05:11 crc kubenswrapper[4854]: I0103 07:05:11.756712 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 07:05:11 crc kubenswrapper[4854]: I0103 07:05:11.759975 4854 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 07:05:11 crc kubenswrapper[4854]: I0103 07:05:11.761136 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" Jan 03 07:05:11 crc kubenswrapper[4854]: I0103 07:05:11.763902 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8eb267d56b3312877d76a39e133c47d96122041fd2c9482d4408e71d0d1ac7f0"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 07:05:11 crc kubenswrapper[4854]: I0103 07:05:11.764504 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://8eb267d56b3312877d76a39e133c47d96122041fd2c9482d4408e71d0d1ac7f0" gracePeriod=600 Jan 03 07:05:12 crc kubenswrapper[4854]: I0103 07:05:12.488854 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"8eb267d56b3312877d76a39e133c47d96122041fd2c9482d4408e71d0d1ac7f0"} Jan 03 07:05:12 crc kubenswrapper[4854]: I0103 07:05:12.488896 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="8eb267d56b3312877d76a39e133c47d96122041fd2c9482d4408e71d0d1ac7f0" exitCode=0 Jan 03 07:05:12 crc kubenswrapper[4854]: I0103 07:05:12.489684 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerStarted","Data":"c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"} Jan 03 07:05:12 crc kubenswrapper[4854]: I0103 07:05:12.490257 4854 scope.go:117] "RemoveContainer" containerID="ef2e96981972415f479c5d2719c86714ac46ddc071e288a7842e34b1b148c527" Jan 03 07:05:29 crc kubenswrapper[4854]: I0103 07:05:29.789110 4854 trace.go:236] Trace[614885411]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (03-Jan-2026 07:05:26.588) (total time: 3198ms): Jan 03 07:05:29 crc kubenswrapper[4854]: Trace[614885411]: [3.198708813s] [3.198708813s] END Jan 03 07:05:34 crc kubenswrapper[4854]: I0103 07:05:34.936136 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.253309 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc 
kubenswrapper[4854]: I0103 07:05:35.253456 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.253580 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.349268 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.349288 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.386542 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.387493 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.390329 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.432254 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.612395 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" podUID="ddf8e54e-858e-432c-ab2d-8b4d83f6282b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.894868 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.894929 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.894872 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:35 crc kubenswrapper[4854]: I0103 07:05:35.895027 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:36 crc kubenswrapper[4854]: I0103 07:05:36.374422 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:36 crc kubenswrapper[4854]: I0103 07:05:36.374498 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:42 crc kubenswrapper[4854]: I0103 07:05:42.113354 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:42 crc kubenswrapper[4854]: I0103 07:05:42.128670 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:05:42 crc kubenswrapper[4854]: I0103 07:05:42.130742 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.138445 4854 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.143466 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.612828 4854 patch_prober.go:28] interesting pod/metrics-server-665fcf668f-65wrt container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.613635 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" podUID="5899ebcd-eec0-44ae-9e07-98b443d209c1" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.613069 4854 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-p67sv container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.613098 4854 patch_prober.go:28] interesting pod/metrics-server-665fcf668f-65wrt container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.613725 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" podUID="128d93c6-02aa-4f68-aac6-cfcab1896a35" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.613772 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" podUID="5899ebcd-eec0-44ae-9e07-98b443d209c1" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.853930 4854 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-b8thp container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.854003 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" podUID="66f9492b-16b5-4b86-bb22-560ad0f8001c" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.960732 4854 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-42f7g container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:43 crc kubenswrapper[4854]: I0103 07:05:43.960807 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" podUID="b98c17f7-1569-4c33-ab65-f4c2ba0555ae" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.086323 4854 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-tgcxk container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.086393 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podUID="ff43f741-1a42-4dfa-bfea-11b28b56487c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.086321 4854 patch_prober.go:28] interesting pod/monitoring-plugin-57f57bb94b-jb8qx container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.086699 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" podUID="a6f05342-5fbe-4b7a-b222-e52b87c7e754" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.209257 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.209352 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.209444 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.209524 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.337313 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.337480 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.488736 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output=< Jan 03 07:05:44 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:05:44 crc kubenswrapper[4854]: > Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.490322 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output=< Jan 03 07:05:44 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:05:44 crc kubenswrapper[4854]: > Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.516418 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.516578 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.604391 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 
07:05:44.645298 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.705569 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.705667 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.705693 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.705792 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.777385 4854 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.777480 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.926447 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.926443 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:44 crc kubenswrapper[4854]: 
I0103 07:05:44.928667 4854 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": context deadline exceeded" start-of-body= Jan 03 07:05:44 crc kubenswrapper[4854]: I0103 07:05:44.928712 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="7fb7ba42-5d69-44aa-87b2-28130157852b" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": context deadline exceeded" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.023238 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.023311 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.059014 4854 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.059590 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="c61cab0d-5846-418e-94ca-35e8a6c31ca0" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.137095 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.220269 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.220367 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.302342 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.386350 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.386348 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.386382 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.386435 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.387407 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.863237 4854 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-mxm65 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.863308 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" podUID="d78e7aa0-58e7-4445-920b-ca73758f9c84" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.894321 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.894387 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.894492 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.894566 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.932502 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.932573 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.932660 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:45 crc kubenswrapper[4854]: I0103 07:05:45.932679 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.275402 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.275493 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.275704 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.275769 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.328472 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" probeResult="failure" output=< Jan 03 07:05:46 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:05:46 crc kubenswrapper[4854]: > Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.328763 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" probeResult="failure" output=< Jan 03 07:05:46 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:05:46 crc kubenswrapper[4854]: > Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.776809 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.777337 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.776841 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:46 crc kubenswrapper[4854]: I0103 07:05:46.777486 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:49 crc kubenswrapper[4854]: I0103 07:05:49.209579 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:49 crc kubenswrapper[4854]: I0103 07:05:49.210170 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:50 crc kubenswrapper[4854]: I0103 07:05:50.131097 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 03 07:05:50 crc kubenswrapper[4854]: I0103 07:05:50.788847 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:51 crc kubenswrapper[4854]: I0103 07:05:51.367743 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:51 crc kubenswrapper[4854]: I0103 07:05:51.368332 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:51 crc kubenswrapper[4854]: I0103 07:05:51.380798 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:51 crc kubenswrapper[4854]: I0103 07:05:51.380877 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:51 crc kubenswrapper[4854]: I0103 07:05:51.584277 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:51 crc kubenswrapper[4854]: 
I0103 07:05:51.584343 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:51 crc kubenswrapper[4854]: I0103 07:05:51.584257 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:51 crc kubenswrapper[4854]: I0103 07:05:51.586043 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:52 crc kubenswrapper[4854]: I0103 07:05:52.117489 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:52 crc kubenswrapper[4854]: I0103 07:05:52.132780 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:05:52 crc kubenswrapper[4854]: I0103 07:05:52.133096 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:05:53 crc kubenswrapper[4854]: I0103 07:05:53.541123 4854 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-p67sv container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:53 crc kubenswrapper[4854]: I0103 07:05:53.541540 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" podUID="128d93c6-02aa-4f68-aac6-cfcab1896a35" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:53 crc kubenswrapper[4854]: I0103 07:05:53.569266 4854 patch_prober.go:28] interesting pod/metrics-server-665fcf668f-65wrt container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:53 crc kubenswrapper[4854]: I0103 07:05:53.569337 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" podUID="5899ebcd-eec0-44ae-9e07-98b443d209c1" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:53 crc kubenswrapper[4854]: I0103 07:05:53.853756 4854 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-b8thp container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:53 crc kubenswrapper[4854]: I0103 07:05:53.854042 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" podUID="66f9492b-16b5-4b86-bb22-560ad0f8001c" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:54 crc kubenswrapper[4854]: I0103 07:05:54.129833 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:05:54 crc kubenswrapper[4854]: I0103 07:05:54.129840 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:05:54 crc kubenswrapper[4854]: I0103 07:05:54.210713 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:54 crc kubenswrapper[4854]: I0103 07:05:54.210792 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:54 crc kubenswrapper[4854]: I0103 07:05:54.705212 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:54 crc kubenswrapper[4854]: I0103 07:05:54.705273 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:54 crc kubenswrapper[4854]: I0103 07:05:54.937342 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.132609 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" 
probeResult="failure" output="command timed out" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.216375 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.216724 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-9mfrk" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.216819 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.217234 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9mfrk" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.218497 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"252cba5186a3c3dc8dd0d53e03137843f95f7633b9b36e4815bc42dab4f08ae0"} pod="metallb-system/speaker-9mfrk" containerMessage="Container speaker failed liveness probe, will be restarted" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.218655 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" containerID="cri-o://252cba5186a3c3dc8dd0d53e03137843f95f7633b9b36e4815bc42dab4f08ae0" gracePeriod=2 Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.317299 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.317315 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.317399 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.317410 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.317455 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.359254 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.383157 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.383216 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.904407 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.904457 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.904422 4854 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-mxm65 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.904512 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.904557 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.904562 4854 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" podUID="d78e7aa0-58e7-4445-920b-ca73758f9c84" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.904488 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.904772 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 07:05:55 crc kubenswrapper[4854]: I0103 07:05:55.905985 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"1c1339677d0c8a6d7d7eee61fd4fa15d6a40580599301989032bde78a8b8e7c2"} pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.276236 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.276638 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.276481 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.276696 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.776131 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.776528 4854 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.776198 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.776807 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.905604 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:56 crc kubenswrapper[4854]: I0103 07:05:56.905680 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:57 crc kubenswrapper[4854]: I0103 07:05:57.572772 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="006530e4-7385-4334-80e8-86bfcf5f645f" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.10:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:57 crc kubenswrapper[4854]: I0103 07:05:57.949400 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" probeResult="failure" output=< Jan 03 07:05:57 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:05:57 crc kubenswrapper[4854]: > Jan 03 07:05:57 crc kubenswrapper[4854]: I0103 07:05:57.953642 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" probeResult="failure" output=< Jan 03 07:05:57 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:05:57 crc kubenswrapper[4854]: > Jan 03 07:05:58 crc kubenswrapper[4854]: I0103 07:05:58.216967 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9mfrk" 
event={"ID":"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6","Type":"ContainerDied","Data":"252cba5186a3c3dc8dd0d53e03137843f95f7633b9b36e4815bc42dab4f08ae0"} Jan 03 07:05:58 crc kubenswrapper[4854]: I0103 07:05:58.217106 4854 generic.go:334] "Generic (PLEG): container finished" podID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerID="252cba5186a3c3dc8dd0d53e03137843f95f7633b9b36e4815bc42dab4f08ae0" exitCode=0 Jan 03 07:05:59 crc kubenswrapper[4854]: I0103 07:05:59.129571 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 03 07:05:59 crc kubenswrapper[4854]: I0103 07:05:59.130614 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 03 07:05:59 crc kubenswrapper[4854]: I0103 07:05:59.210184 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:59 crc kubenswrapper[4854]: I0103 07:05:59.210287 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:59 crc kubenswrapper[4854]: I0103 07:05:59.295518 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:05:59 crc kubenswrapper[4854]: I0103 07:05:59.705291 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:05:59 crc kubenswrapper[4854]: I0103 07:05:59.705553 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.247274 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9mfrk" event={"ID":"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6","Type":"ContainerStarted","Data":"0e7ade64b8e2469e96b56a33f7507989607bc747da1305a20feaa1f07204144e"} Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.247616 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9mfrk" Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.383629 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" podUID="c752fc50-5b45-4cbc-8a1c-b0cec9e720e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.582367 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" podUID="ba0f32da-a0e3-4c43-8dde-d6212a1c63e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.582406 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" podUID="ba0f32da-a0e3-4c43-8dde-d6212a1c63e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.722511 4854 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-vtpbv container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.722955 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" podUID="b0379c6e-b02d-40ef-b9ae-add1e633bc4a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.851350 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:00 crc kubenswrapper[4854]: I0103 07:06:00.851349 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.021817 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.021903 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.022175 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.022199 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.127346 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" podUID="1f9928f3-0c28-40df-b6ad-c871424ad3a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.127762 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" podUID="1f9928f3-0c28-40df-b6ad-c871424ad3a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.133357 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.133453 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.134391 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"7def2b0ff7c9c47e2ab148307011319889a3bf934eaf31dbecc60b60a9497a0e"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.134494 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" containerID="cri-o://7def2b0ff7c9c47e2ab148307011319889a3bf934eaf31dbecc60b60a9497a0e" gracePeriod=30 Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.407289 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.407344 4854 patch_prober.go:28] interesting 
pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.407362 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.407382 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.407412 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.407412 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.489344 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.489407 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.489507 4854 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.489534 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc 
kubenswrapper[4854]: I0103 07:06:01.489539 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.489586 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.526851 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.526858 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="" start-of-body= Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.572930 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-czgkw"] Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.579313 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.688371 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" podUID="7f7c87f2-5743-4000-a36a-3a9400e24cdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.688496 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" podUID="7f7c87f2-5743-4000-a36a-3a9400e24cdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.745777 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-utilities\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.745881 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-catalog-content\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.745924 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmvd\" (UniqueName: 
\"kubernetes.io/projected/8ff403f4-841a-4305-8f8d-4f5fd6b14765-kube-api-access-nvmvd\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.746885 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.747384 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.851261 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-utilities\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.851419 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-catalog-content\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.851622 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmvd\" (UniqueName: \"kubernetes.io/projected/8ff403f4-841a-4305-8f8d-4f5fd6b14765-kube-api-access-nvmvd\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.854283 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-utilities\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:01 crc kubenswrapper[4854]: I0103 07:06:01.855473 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-catalog-content\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.129628 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.129992 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 
07:06:02.130192 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.130231 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.150330 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.150409 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.151438 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"85534b86966512a3a6777d75350ffebd09d510b71ed3bd20f577bb5b92d31a74"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.259256 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" podUID="ea9863f6-8706-4844-ad3e-93309cdbef22" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.259291 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.259292 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" podUID="ea9863f6-8706-4844-ad3e-93309cdbef22" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.259371 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-6fczv" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.260809 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"9539e21b873a1e4b3365e05006ca6162cad1734de5e700761c66055ef4d7c3a1"} pod="metallb-system/frr-k8s-6fczv" containerMessage="Container frr failed liveness probe, will be restarted" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.260911 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="frr" containerID="cri-o://9539e21b873a1e4b3365e05006ca6162cad1734de5e700761c66055ef4d7c3a1" gracePeriod=2 Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.347371 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="controller" probeResult="failure" output="Get 
\"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.347834 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-bzjqc" podUID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.347896 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.348284 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-bzjqc" podUID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.376575 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmvd\" (UniqueName: \"kubernetes.io/projected/8ff403f4-841a-4305-8f8d-4f5fd6b14765-kube-api-access-nvmvd\") pod \"redhat-marketplace-czgkw\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:02 crc kubenswrapper[4854]: I0103 07:06:02.518339 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.129543 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.129564 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.129964 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.130629 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.130784 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.131551 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"b6c69658c671ea8316febf8922284a5f50e580024481037297ff09c1c29c326a"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.297410 4854 generic.go:334] "Generic (PLEG): container finished" 
podID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerID="9539e21b873a1e4b3365e05006ca6162cad1734de5e700761c66055ef4d7c3a1" exitCode=143 Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.297845 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerDied","Data":"9539e21b873a1e4b3365e05006ca6162cad1734de5e700761c66055ef4d7c3a1"} Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.537264 4854 patch_prober.go:28] interesting pod/metrics-server-665fcf668f-65wrt container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.537296 4854 patch_prober.go:28] interesting pod/metrics-server-665fcf668f-65wrt container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.537662 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" podUID="5899ebcd-eec0-44ae-9e07-98b443d209c1" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.537738 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.537579 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" podUID="5899ebcd-eec0-44ae-9e07-98b443d209c1" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.544049 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"f12fe622af614b41bd44ec6bb3c9b091e81f021cd35ea69f811ff6d066d06d2b"} pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" containerMessage="Container metrics-server failed liveness probe, will be restarted" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.544109 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" podUID="5899ebcd-eec0-44ae-9e07-98b443d209c1" containerName="metrics-server" containerID="cri-o://f12fe622af614b41bd44ec6bb3c9b091e81f021cd35ea69f811ff6d066d06d2b" gracePeriod=170 Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.546973 4854 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-p67sv container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.547043 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" 
podUID="128d93c6-02aa-4f68-aac6-cfcab1896a35" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.547199 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.853980 4854 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-b8thp container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.854062 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" podUID="66f9492b-16b5-4b86-bb22-560ad0f8001c" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.854170 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.967335 4854 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-42f7g container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:03 crc kubenswrapper[4854]: I0103 07:06:03.967433 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" podUID="b98c17f7-1569-4c33-ab65-f4c2ba0555ae" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.007284 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.007636 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.007682 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.008190 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" 
podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.091353 4854 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-tgcxk container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.091663 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podUID="ff43f741-1a42-4dfa-bfea-11b28b56487c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.091402 4854 patch_prober.go:28] interesting pod/monitoring-plugin-57f57bb94b-jb8qx container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.091730 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" podUID="a6f05342-5fbe-4b7a-b222-e52b87c7e754" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.130289 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.132321 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.133172 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.133373 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.210237 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.210290 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="gateway" probeResult="failure" output="Get 
\"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.210333 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.210372 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.452240 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:04 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:04 crc kubenswrapper[4854]: > Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.465260 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:04 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:04 crc kubenswrapper[4854]: > Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.465456 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:04 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:04 crc kubenswrapper[4854]: > Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.471545 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:04 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:04 crc kubenswrapper[4854]: > Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.547175 4854 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-p67sv container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.547261 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" podUID="128d93c6-02aa-4f68-aac6-cfcab1896a35" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.548371 4854 patch_prober.go:28] interesting 
pod/logging-loki-distributor-5f678c8dd6-p67sv container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.548459 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" podUID="128d93c6-02aa-4f68-aac6-cfcab1896a35" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.558253 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.558278 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podUID="40ad961e-d740-49fa-9a1f-e9d950002a3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.558333 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.704932 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": context deadline exceeded" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.705007 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": context deadline exceeded" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.763301 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podUID="c2f6c336-91f0-41e6-b439-c5d940264b7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.778314 4854 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.778380 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" 
podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.846284 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.846332 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podUID="81de0b3b-e6fc-45c9-b347-995726d00213" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.846382 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podUID="81de0b3b-e6fc-45c9-b347-995726d00213" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.846415 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podUID="40ad961e-d740-49fa-9a1f-e9d950002a3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.846447 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podUID="c2f6c336-91f0-41e6-b439-c5d940264b7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.846423 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.846478 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.846499 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc 
kubenswrapper[4854]: I0103 07:06:04.852860 4854 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-b8thp container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.852944 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" podUID="66f9492b-16b5-4b86-bb22-560ad0f8001c" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.855142 4854 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-b8thp container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.855184 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" podUID="66f9492b-16b5-4b86-bb22-560ad0f8001c" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.928280 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:04 crc kubenswrapper[4854]: I0103 07:06:04.928410 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.010240 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.176346 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" podUID="04d8c7f1-6674-45b0-9506-9d62c1a2f892" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.176637 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.211207 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.211537 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.342366 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.342466 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podUID="25988b2b-1924-4007-a6b1-5e5403d5dc68" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.342537 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.385293 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9091/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.385366 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.424533 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.424615 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" podUID="04d8c7f1-6674-45b0-9506-9d62c1a2f892" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.425514 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.425553 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.507441 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podUID="14991c3c-8c35-4008-b1a0-1b8690074322" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.507445 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.507402 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podUID="14991c3c-8c35-4008-b1a0-1b8690074322" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.705202 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8081/live\": context deadline exceeded" start-of-body= Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.705271 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/live\": context deadline exceeded" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.757408 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.757411 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podUID="25988b2b-1924-4007-a6b1-5e5403d5dc68" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.757463 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.757504 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.757866 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.757959 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.847164 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:05 crc kubenswrapper[4854]: I0103 07:06:05.847295 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:05.933474 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:05.942507 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podUID="8f21d9f8-0bdd-43de-8196-186dccb7b2f8" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:05.958139 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.048277 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.048368 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" podUID="ddf8e54e-858e-432c-ab2d-8b4d83f6282b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.048419 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.048569 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.048620 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.048647 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.048692 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.049193 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.049255 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.050057 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.050163 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.050245 4854 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-mxm65 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.050304 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" podUID="d78e7aa0-58e7-4445-920b-ca73758f9c84" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.050778 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" podUID="ddf8e54e-858e-432c-ab2d-8b4d83f6282b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.055758 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.056115 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podUID="8f21d9f8-0bdd-43de-8196-186dccb7b2f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.056990 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.129116 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wwws2" podUID="6cc37176-dd9d-4138-a8f4-615d7815311a" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.193444 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.193465 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.193522 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.193584 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.193644 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.195442 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"51cf1354e8866c019109dd0689ead62267930f50c8e279fc80a89946e66485df"} pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" containerMessage="Container controller-manager failed liveness probe, will be restarted" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.195535 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" containerID="cri-o://51cf1354e8866c019109dd0689ead62267930f50c8e279fc80a89946e66485df" gracePeriod=30 Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.352321 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" 
event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"238dfcbbf4a02a171398e6c275c9729552b51ad995fef4b9018da5a699cfa729"} Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.374792 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.374916 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.746681 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.746787 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.776097 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:06 crc kubenswrapper[4854]: I0103 07:06:06.776182 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.022656 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.022718 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.022754 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded" start-of-body= Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.022836 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.084939 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.084998 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.085043 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.085487 4854 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-mxm65 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.085539 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" podUID="d78e7aa0-58e7-4445-920b-ca73758f9c84" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.086138 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"8ba167167a7457c4d989953c93e58c0a961916861f9a13e0bb90cacb5956b991"} pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.086415 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" containerID="cri-o://8ba167167a7457c4d989953c93e58c0a961916861f9a13e0bb90cacb5956b991" gracePeriod=30 Jan 03 07:06:07 
crc kubenswrapper[4854]: I0103 07:06:07.133509 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.133652 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.492315 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" podUID="29a7524a-4f1c-4e10-ae41-8e05f91cbde6" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.32:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.571757 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="006530e4-7385-4334-80e8-86bfcf5f645f" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.10:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:07 crc kubenswrapper[4854]: I0103 07:06:07.571892 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="006530e4-7385-4334-80e8-86bfcf5f645f" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.10:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.147135 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.147334 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.147398 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.147369 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.147754 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.981337 4854 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-k8nxq container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.58:5000/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.981757 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.58:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.981482 4854 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-k8nxq container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.58:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:08 crc kubenswrapper[4854]: I0103 07:06:08.981846 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.58:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.128836 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" podUID="c6c4aab5-c8ed-4323-87ef-a932943637e0" containerName="ovnkube-controller" probeResult="failure" output="command timed out" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.129282 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.129384 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.130311 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.130375 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ovn-northd-0" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.130458 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ovn-northd" containerStatusID={"Type":"cri-o","ID":"bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd"} pod="openstack/ovn-northd-0" containerMessage="Container ovn-northd failed liveness probe, will be restarted" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.130574 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" containerID="cri-o://bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" gracePeriod=30 Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.132018 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:09 crc 
kubenswrapper[4854]: I0103 07:06:09.132102 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-s6gct" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.133142 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46"} pod="openshift-marketplace/community-operators-s6gct" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.133176 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" containerID="cri-o://dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" gracePeriod=30 Jan 03 07:06:09 crc kubenswrapper[4854]: E0103 07:06:09.159567 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:09 crc kubenswrapper[4854]: E0103 07:06:09.165398 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:09 crc kubenswrapper[4854]: E0103 07:06:09.167383 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:09 crc kubenswrapper[4854]: E0103 07:06:09.167437 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.210596 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.210667 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.210751 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get 
\"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.210672 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.313407 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.664999 4854 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.665413 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.704244 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.704607 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.704901 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.704838 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.931972 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.932041 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.932132 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:09 crc kubenswrapper[4854]: I0103 07:06:09.932046 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.132329 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.134179 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6gct" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.186260 4854 patch_prober.go:28] interesting pod/loki-operator-controller-manager-bd45dfbc8-vmrll container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.186320 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" podUID="0c7ed8af-66a8-4ce9-95bd-4818cc646245" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.186383 4854 patch_prober.go:28] interesting pod/loki-operator-controller-manager-bd45dfbc8-vmrll container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.186398 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" podUID="0c7ed8af-66a8-4ce9-95bd-4818cc646245" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.383718 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.383792 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.383895 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.383978 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.424323 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" podUID="c752fc50-5b45-4cbc-8a1c-b0cec9e720e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.574686 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" podUID="ba0f32da-a0e3-4c43-8dde-d6212a1c63e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.574785 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" podUID="29a7524a-4f1c-4e10-ae41-8e05f91cbde6" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.32:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.722417 4854 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-vtpbv container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.722531 4854 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" podUID="b0379c6e-b02d-40ef-b9ae-add1e633bc4a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.853337 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.853673 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 07:06:10 crc kubenswrapper[4854]: I0103 07:06:10.853364 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.026347 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.026404 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.026783 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.026799 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.026828 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.026857 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" 
probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.026914 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.026926 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.071866 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6fczv" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.091324 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" podUID="1f9928f3-0c28-40df-b6ad-c871424ad3a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.128582 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wwws2" podUID="6cc37176-dd9d-4138-a8f4-615d7815311a" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.319369 4854 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8d5t5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.319398 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.319440 4854 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8d5t5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.319469 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" podUID="58b842c2-f723-45ae-9d08-9218837bb66a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.319438 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" podUID="58b842c2-f723-45ae-9d08-9218837bb66a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.319493 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.325921 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.325949 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.325999 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.326425 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.326447 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.326534 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.327189 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"0da95cae6db85b0c5d3e13e5ed80e896d17465b69e5fcda34ad41246e340f5d0"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.327219 4854 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" containerID="cri-o://0da95cae6db85b0c5d3e13e5ed80e896d17465b69e5fcda34ad41246e340f5d0" gracePeriod=30 Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.510394 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.510783 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.510407 4854 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.510848 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.510518 4854 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.510886 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.592356 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.592876 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.592405 
4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lcbzf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.592923 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.592955 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.592980 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.592994 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.593017 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lcbzf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.593036 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.592850 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.649374 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" podUID="7f7c87f2-5743-4000-a36a-3a9400e24cdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.747013 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.755411 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.755567 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 03 07:06:11 crc kubenswrapper[4854]: I0103 07:06:11.895480 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.129147 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.257312 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" podUID="ea9863f6-8706-4844-ad3e-93309cdbef22" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.257338 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" podUID="ea9863f6-8706-4844-ad3e-93309cdbef22" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.257435 4854 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.339273 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.339316 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-bzjqc" podUID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerName="controller" 
probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.339289 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-bzjqc" podUID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.339273 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.339380 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" podUID="29a7524a-4f1c-4e10-ae41-8e05f91cbde6" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:06:12 crc kubenswrapper[4854]: E0103 07:06:12.387492 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:12 crc kubenswrapper[4854]: E0103 07:06:12.396131 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:12 crc kubenswrapper[4854]: E0103 07:06:12.397683 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:12 crc kubenswrapper[4854]: E0103 07:06:12.397717 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.978315 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.978377 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.978748 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.978810 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.978860 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.978960 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.980662 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"21f27f09d6dbc1e7c9b44ed26845c77c7130232e16ad10ca00346ecd3f3f82a6"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 03 07:06:12 crc kubenswrapper[4854]: I0103 07:06:12.980755 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" containerID="cri-o://21f27f09d6dbc1e7c9b44ed26845c77c7130232e16ad10ca00346ecd3f3f82a6" gracePeriod=30 Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133302 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-dll2c" podUID="04465680-9e76-4b04-aa5f-c94218a6bf28" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133500 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133558 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dll2c" podUID="04465680-9e76-4b04-aa5f-c94218a6bf28" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133602 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133627 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" podUID="c6c4aab5-c8ed-4323-87ef-a932943637e0" containerName="sbdb" probeResult="failure" 
output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133643 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133619 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133644 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133689 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" podUID="c6c4aab5-c8ed-4323-87ef-a932943637e0" containerName="nbdb" probeResult="failure" output="command timed out" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.133759 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.234304 4854 patch_prober.go:28] interesting pod/apiserver-76f77b778f-pzlj8 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.234373 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" podUID="2259a421-a8dd-45e8-baa8-15cf1d37782e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.286820 4854 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.286871 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.544118 4854 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-p67sv container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.544528 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" podUID="128d93c6-02aa-4f68-aac6-cfcab1896a35" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.544272 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" podUID="29a7524a-4f1c-4e10-ae41-8e05f91cbde6" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.853531 4854 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-b8thp container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.853622 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" podUID="66f9492b-16b5-4b86-bb22-560ad0f8001c" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:13 crc kubenswrapper[4854]: E0103 07:06:13.989256 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:13 crc kubenswrapper[4854]: E0103 07:06:13.990764 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:13 crc kubenswrapper[4854]: E0103 07:06:13.992307 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:13 crc kubenswrapper[4854]: E0103 07:06:13.992335 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.999306 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.999397 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" 
podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.999464 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.999488 4854 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-42f7g container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.999495 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:13 crc kubenswrapper[4854]: I0103 07:06:13.999518 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" podUID="b98c17f7-1569-4c33-ab65-f4c2ba0555ae" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.041560 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.041617 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.086342 4854 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-tgcxk container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.33:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.086415 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podUID="ff43f741-1a42-4dfa-bfea-11b28b56487c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.33:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.127235 4854 
patch_prober.go:28] interesting pod/monitoring-plugin-57f57bb94b-jb8qx container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.127293 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" podUID="a6f05342-5fbe-4b7a-b222-e52b87c7e754" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.127352 4854 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-tgcxk container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.127370 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podUID="ff43f741-1a42-4dfa-bfea-11b28b56487c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.133607 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.136863 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.136976 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.138666 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.209956 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.210010 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 
07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.210181 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.210203 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.336228 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.336816 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.473447 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.558555 4854 trace.go:236] Trace[1306132860]: "Calculate volume metrics of wal for pod openshift-logging/logging-loki-ingester-0" (03-Jan-2026 07:06:04.601) (total time: 9955ms): Jan 03 07:06:14 crc kubenswrapper[4854]: Trace[1306132860]: [9.955746796s] [9.955746796s] END Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.558555 4854 trace.go:236] Trace[1700489891]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (03-Jan-2026 07:06:00.313) (total time: 14243ms): Jan 03 07:06:14 crc kubenswrapper[4854]: Trace[1700489891]: [14.243782353s] [14.243782353s] END Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.558557 4854 trace.go:236] Trace[791387787]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (03-Jan-2026 07:05:59.238) (total time: 15319ms): Jan 03 07:06:14 crc kubenswrapper[4854]: Trace[791387787]: [15.319651085s] [15.319651085s] END Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.641279 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podUID="40ad961e-d740-49fa-9a1f-e9d950002a3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.641306 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podUID="c2f6c336-91f0-41e6-b439-c5d940264b7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.682407 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.682467 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podUID="81de0b3b-e6fc-45c9-b347-995726d00213" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.704758 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.704959 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.706016 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.706150 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.756375 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.756687 4854 trace.go:236] Trace[1056301313]: "Calculate volume metrics of storage for pod minio-dev/minio" (03-Jan-2026 07:06:03.567) (total time: 11188ms): Jan 03 07:06:14 crc kubenswrapper[4854]: Trace[1056301313]: [11.188933574s] [11.188933574s] END Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.778864 4854 patch_prober.go:28] interesting 
pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.778925 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.846273 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.846633 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.890266 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.907674 4854 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-vd2jp container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.907788 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" podUID="e05124c8-4705-4d57-82ec-b1ae0658e98e" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.936277 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.936320 4854 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 
07:06:14.936501 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="7fb7ba42-5d69-44aa-87b2-28130157852b" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.936644 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 07:06:14 crc kubenswrapper[4854]: I0103 07:06:14.979249 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.063533 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-85bb9cd67c-w6ss9" podUID="b3991ad0-4c9f-466c-a5b2-a801fad29c1e" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.209:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.133231 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.133243 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.134584 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.186374 4854 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.186394 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podUID="25988b2b-1924-4007-a6b1-5e5403d5dc68" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.186428 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="c61cab0d-5846-418e-94ca-35e8a6c31ca0" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: 
I0103 07:06:15.186318 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-85bb9cd67c-w6ss9" podUID="b3991ad0-4c9f-466c-a5b2-a801fad29c1e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.209:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.186469 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" podUID="04d8c7f1-6674-45b0-9506-9d62c1a2f892" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.227405 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.227510 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podUID="14991c3c-8c35-4008-b1a0-1b8690074322" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.268501 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.383810 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.384157 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393267 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393375 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393418 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393374 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393535 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393396 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393606 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393655 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.393739 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.395334 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"cf53552d47c3573d6b6c388776a4079b91daadea1b72bad72d69acf59404441c"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.395392 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" containerID="cri-o://cf53552d47c3573d6b6c388776a4079b91daadea1b72bad72d69acf59404441c" gracePeriod=30 Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.434272 4854 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.434301 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.434368 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.434393 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.434417 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.434444 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.448842 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"880bc6dc8873f0bbc31cde5de1f7081f573da192ec8aefac577a46a08ed98ee5"} pod="openshift-console/console-67666b4d85-nwx4t" containerMessage="Container console failed liveness probe, will be restarted" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.475394 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.517220 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podUID="8f21d9f8-0bdd-43de-8196-186dccb7b2f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.612331 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" podUID="ddf8e54e-858e-432c-ab2d-8b4d83f6282b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.775828 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.776238 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.862941 4854 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-mxm65 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.863022 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" podUID="d78e7aa0-58e7-4445-920b-ca73758f9c84" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.894667 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.894789 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.987379 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.987409 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:15 crc kubenswrapper[4854]: I0103 07:06:15.987547 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.130440 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wwws2" podUID="6cc37176-dd9d-4138-a8f4-615d7815311a" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.131589 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.133245 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.134010 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.193549 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.193610 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.285494 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="64c47821-9bcb-435f-9802-15d45eb73f52" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.286302 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="64c47821-9bcb-435f-9802-15d45eb73f52" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.374388 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.374654 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.394059 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.394154 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.477411 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.477486 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.485323 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5f9d458c9d-vtsmw" podUID="c5aa7cd7-25a7-4228-a047-5fef936c6a9a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.202:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.485908 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5f9d458c9d-vtsmw" podUID="c5aa7cd7-25a7-4228-a047-5fef936c6a9a" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.202:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.746529 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.792421 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:16 crc kubenswrapper[4854]: I0103 07:06:16.792499 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.112590 4854 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.130260 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.130961 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.133791 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.137366 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:17 crc kubenswrapper[4854]: E0103 07:06:17.387019 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd is running failed: container process not found" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:17 crc kubenswrapper[4854]: E0103 07:06:17.388217 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd is running failed: container process not found" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:17 crc kubenswrapper[4854]: E0103 07:06:17.388615 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd is running failed: container process not found" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:17 crc kubenswrapper[4854]: E0103 07:06:17.388684 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd is running failed: container process not found" 
probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.482715 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" event={"ID":"77084a3a-5610-4014-a3bf-6d4073a74d44","Type":"ContainerDied","Data":"0da95cae6db85b0c5d3e13e5ed80e896d17465b69e5fcda34ad41246e340f5d0"} Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.482786 4854 generic.go:334] "Generic (PLEG): container finished" podID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerID="0da95cae6db85b0c5d3e13e5ed80e896d17465b69e5fcda34ad41246e340f5d0" exitCode=0 Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.487044 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6047aa72-faf9-4f4d-95ab-df8b1230cedf/ovn-northd/0.log" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.487113 4854 generic.go:334] "Generic (PLEG): container finished" podID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" exitCode=139 Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.487146 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6047aa72-faf9-4f4d-95ab-df8b1230cedf","Type":"ContainerDied","Data":"bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd"} Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.491380 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" podUID="29a7524a-4f1c-4e10-ae41-8e05f91cbde6" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.32:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.492026 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.493772 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"161aa8978d0e162a2d5fef70db9445adca1e8119b53f12d630da478dbffc384e"} pod="hostpath-provisioner/csi-hostpathplugin-qmphn" containerMessage="Container hostpath-provisioner failed liveness probe, will be restarted" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.493856 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" podUID="29a7524a-4f1c-4e10-ae41-8e05f91cbde6" containerName="hostpath-provisioner" containerID="cri-o://161aa8978d0e162a2d5fef70db9445adca1e8119b53f12d630da478dbffc384e" gracePeriod=30 Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.572397 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="006530e4-7385-4334-80e8-86bfcf5f645f" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.10:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.572447 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="006530e4-7385-4334-80e8-86bfcf5f645f" containerName="kube-state-metrics" probeResult="failure" output="Get 
\"https://10.217.1.10:8080/livez\": context deadline exceeded" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.572532 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.573910 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"b58f07a91b101d402e5ad5f9c0b5494b8e40ab02b756053d8fab7a7e58a30fc7"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.573961 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="006530e4-7385-4334-80e8-86bfcf5f645f" containerName="kube-state-metrics" containerID="cri-o://b58f07a91b101d402e5ad5f9c0b5494b8e40ab02b756053d8fab7a7e58a30fc7" gracePeriod=30 Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.757513 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.863695 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wwws2" Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.931549 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 03 07:06:17 crc kubenswrapper[4854]: I0103 07:06:17.931806 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 03 07:06:18 crc kubenswrapper[4854]: I0103 07:06:18.099282 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="d8d21d2a-7f73-4026-87ab-632c4a623577" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.207:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:18 crc kubenswrapper[4854]: I0103 07:06:18.099287 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d8d21d2a-7f73-4026-87ab-632c4a623577" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.207:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:18 crc kubenswrapper[4854]: I0103 07:06:18.133806 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:18 crc kubenswrapper[4854]: I0103 07:06:18.134245 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:18 crc kubenswrapper[4854]: I0103 07:06:18.235234 4854 patch_prober.go:28] interesting pod/apiserver-76f77b778f-pzlj8 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:18 crc kubenswrapper[4854]: I0103 07:06:18.235283 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" podUID="2259a421-a8dd-45e8-baa8-15cf1d37782e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:18 crc kubenswrapper[4854]: I0103 07:06:18.288543 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-858654f9db-98h9c" podUID="c0e603b1-39cd-4500-a0d6-190b7a522734" containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.41:9403/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.136272 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.144753 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.211003 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.211221 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.213322 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.213430 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:19 crc kubenswrapper[4854]: 
I0103 07:06:19.295383 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.295948 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.303955 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.404942 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.405013 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.507715 4854 generic.go:334] "Generic (PLEG): container finished" podID="006530e4-7385-4334-80e8-86bfcf5f645f" containerID="b58f07a91b101d402e5ad5f9c0b5494b8e40ab02b756053d8fab7a7e58a30fc7" exitCode=2 Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.507827 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"006530e4-7385-4334-80e8-86bfcf5f645f","Type":"ContainerDied","Data":"b58f07a91b101d402e5ad5f9c0b5494b8e40ab02b756053d8fab7a7e58a30fc7"} Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.664362 4854 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.664732 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.703885 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.703975 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" 
podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.704959 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.705037 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.908414 4854 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-vd2jp container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:19 crc kubenswrapper[4854]: I0103 07:06:19.908479 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" podUID="e05124c8-4705-4d57-82ec-b1ae0658e98e" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.13:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.133807 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.145976 4854 patch_prober.go:28] interesting pod/loki-operator-controller-manager-bd45dfbc8-vmrll container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.146033 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" podUID="0c7ed8af-66a8-4ce9-95bd-4818cc646245" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.325326 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.325587 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.384906 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.384981 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.520888 4854 generic.go:334] "Generic (PLEG): container finished" podID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerID="7def2b0ff7c9c47e2ab148307011319889a3bf934eaf31dbecc60b60a9497a0e" exitCode=0 Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.520930 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerDied","Data":"7def2b0ff7c9c47e2ab148307011319889a3bf934eaf31dbecc60b60a9497a0e"} Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.580389 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" podUID="ba0f32da-a0e3-4c43-8dde-d6212a1c63e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.580456 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" podUID="ba0f32da-a0e3-4c43-8dde-d6212a1c63e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.580860 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.722550 4854 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-vtpbv container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.722610 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" podUID="b0379c6e-b02d-40ef-b9ae-add1e633bc4a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.722656 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.724100 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"b003306c709cdb4b4c71e9bbbb037118b8466a77a516a31cf224abfbcfbcd931"} pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.724149 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" podUID="b0379c6e-b02d-40ef-b9ae-add1e633bc4a" containerName="authentication-operator" containerID="cri-o://b003306c709cdb4b4c71e9bbbb037118b8466a77a516a31cf224abfbcfbcd931" gracePeriod=30 Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.854272 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.854295 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.854352 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.855328 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"920c20a2aca36567c8d53a27e449dedf658aa6fc46392a08b3ed436f3b4ece63"} pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" containerMessage="Container webhook-server failed liveness probe, will be restarted" Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.855373 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" containerID="cri-o://920c20a2aca36567c8d53a27e449dedf658aa6fc46392a08b3ed436f3b4ece63" gracePeriod=2 Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.932417 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 03 07:06:20 crc kubenswrapper[4854]: I0103 07:06:20.932484 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: 
connection refused" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.020827 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.020827 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.021346 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.021620 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.021532 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.021848 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.023377 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"e471bf3617e0cc1e81dc107d06d8c5e3583056df720e8fbf7f91a51f43b4521b"} pod="openshift-console-operator/console-operator-58897d9998-2lwzj" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.023433 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" containerID="cri-o://e471bf3617e0cc1e81dc107d06d8c5e3583056df720e8fbf7f91a51f43b4521b" gracePeriod=30 Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.032348 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" podUID="667c29ce-e696-4ad7-97f1-4b43f3eba910" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.20:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.129315 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" 
podUID="1f9928f3-0c28-40df-b6ad-c871424ad3a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.129662 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" podUID="1f9928f3-0c28-40df-b6ad-c871424ad3a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.129954 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.158250 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-678d8c789d-4cfwq" podUID="725ed672-0c58-4f2c-b6c2-eb51c516a7a9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.19:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.158283 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-678d8c789d-4cfwq" podUID="725ed672-0c58-4f2c-b6c2-eb51c516a7a9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.19:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.158240 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5b8559b4dd-7xq2s" podUID="667c29ce-e696-4ad7-97f1-4b43f3eba910" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.20:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.295359 4854 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8d5t5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.295417 4854 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8d5t5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.295434 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" podUID="58b842c2-f723-45ae-9d08-9218837bb66a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.295501 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8d5t5" podUID="58b842c2-f723-45ae-9d08-9218837bb66a" containerName="olm-operator" 
probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.442378 4854 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.442459 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.442573 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.484305 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.484374 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.484426 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.485605 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"63cc6c355f6397dba553d8cb89d15fb9ff68767748c5f862c6a7a5d7d0806e07"} pod="openshift-ingress/router-default-5444994796-tdlx9" containerMessage="Container router failed liveness probe, will be restarted" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.485648 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" containerID="cri-o://63cc6c355f6397dba553d8cb89d15fb9ff68767748c5f862c6a7a5d7d0806e07" gracePeriod=10 Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.526285 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.526333 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.526332 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.526405 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.526522 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.526552 4854 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.526578 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.526609 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.538061 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"cd5d4db41f7b67e9b596b2078363907a0118e5e595d471db2365026bf43e6851"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.538130 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" containerID="cri-o://cd5d4db41f7b67e9b596b2078363907a0118e5e595d471db2365026bf43e6851" gracePeriod=30 Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.538804 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" event={"ID":"77084a3a-5610-4014-a3bf-6d4073a74d44","Type":"ContainerStarted","Data":"ba38b7cbc9885962e72e1148760628338b7d0291a4bce4fb9f806ec41b0e73b8"} Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.554328 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.608219 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.608244 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lcbzf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.608277 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.608309 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.608320 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.608359 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.608454 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lcbzf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.608485 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.650269 4854 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: 
Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.650330 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.650690 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" podUID="ba0f32da-a0e3-4c43-8dde-d6212a1c63e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.733263 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.733264 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" podUID="7f7c87f2-5743-4000-a36a-3a9400e24cdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.733300 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" podUID="7f7c87f2-5743-4000-a36a-3a9400e24cdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.733347 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.733421 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.748434 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.748583 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/healthy\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:21 crc kubenswrapper[4854]: I0103 07:06:21.915416 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" podUID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.022299 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.022377 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.051210 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" containerID="cri-o://1c1339677d0c8a6d7d7eee61fd4fa15d6a40580599301989032bde78a8b8e7c2" gracePeriod=14 Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.130499 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.130522 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.131350 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" podUID="c6c4aab5-c8ed-4323-87ef-a932943637e0" containerName="nbdb" probeResult="failure" output="command timed out" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.131411 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-cs97n" podUID="c6c4aab5-c8ed-4323-87ef-a932943637e0" containerName="sbdb" probeResult="failure" output="command timed out" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.133239 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" podUID="ea9863f6-8706-4844-ad3e-93309cdbef22" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.133279 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.134979 4854 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"c5a1cc71bf27b754936bdde8e575bd7aa1f0da15a1c6e03b51b95f04ffc0c08b"} pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.135026 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" podUID="ea9863f6-8706-4844-ad3e-93309cdbef22" containerName="frr-k8s-webhook-server" containerID="cri-o://c5a1cc71bf27b754936bdde8e575bd7aa1f0da15a1c6e03b51b95f04ffc0c08b" gracePeriod=10 Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.258233 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" podUID="ea9863f6-8706-4844-ad3e-93309cdbef22" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.258271 4854 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.258536 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.300281 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" podUID="1f9928f3-0c28-40df-b6ad-c871424ad3a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.384301 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.384380 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-6fczv" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.384460 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-bzjqc" podUID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.384556 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.384670 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="controller" probeResult="failure" output="Get 
\"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.384775 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-bzjqc" podUID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.385059 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6fczv" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.385286 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.385838 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"820d48d16dc8ad8bfb1070482e2c87343667e66671ad0c016f3473c4af9b4abf"} pod="metallb-system/frr-k8s-6fczv" containerMessage="Container controller failed liveness probe, will be restarted" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.386432 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="controller" containerID="cri-o://820d48d16dc8ad8bfb1070482e2c87343667e66671ad0c016f3473c4af9b4abf" gracePeriod=2 Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.387875 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"179fce328f6e20fdc3653c24e9f94fa6adc4fd59f7d79ee8575929659e342509"} pod="metallb-system/controller-5bddd4b946-bzjqc" containerMessage="Container controller failed liveness probe, will be restarted" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.388012 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-5bddd4b946-bzjqc" podUID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerName="controller" containerID="cri-o://179fce328f6e20fdc3653c24e9f94fa6adc4fd59f7d79ee8575929659e342509" gracePeriod=2 Jan 03 07:06:22 crc kubenswrapper[4854]: E0103 07:06:22.388506 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd is running failed: container process not found" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:22 crc kubenswrapper[4854]: E0103 07:06:22.388950 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd is running failed: container process not found" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:22 crc kubenswrapper[4854]: E0103 07:06:22.389867 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd is running 
failed: container process not found" containerID="bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 03 07:06:22 crc kubenswrapper[4854]: E0103 07:06:22.389937 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd1d9b35d30511712419b9426f660f16ade983dd8d2b80aff8b4d7f562ca61fd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6047aa72-faf9-4f4d-95ab-df8b1230cedf" containerName="ovn-northd" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.488293 4854 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.488369 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.558508 4854 generic.go:334] "Generic (PLEG): container finished" podID="e1f91a20-c61d-488f-98ab-f966174f3764" containerID="cf53552d47c3573d6b6c388776a4079b91daadea1b72bad72d69acf59404441c" exitCode=0 Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.558606 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" event={"ID":"e1f91a20-c61d-488f-98ab-f966174f3764","Type":"ContainerDied","Data":"cf53552d47c3573d6b6c388776a4079b91daadea1b72bad72d69acf59404441c"} Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.559613 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.559766 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.572304 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="006530e4-7385-4334-80e8-86bfcf5f645f" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.10:8081/readyz\": dial tcp 10.217.1.10:8081: connect: connection refused" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.572420 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 03 07:06:22 crc kubenswrapper[4854]: I0103 07:06:22.774504 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" 
podUID="7f7c87f2-5743-4000-a36a-3a9400e24cdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.129330 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.130988 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.131049 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.131339 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-dll2c" podUID="04465680-9e76-4b04-aa5f-c94218a6bf28" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.133267 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.133277 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dll2c" podUID="04465680-9e76-4b04-aa5f-c94218a6bf28" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.134518 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-mkvp7" podUID="babc1db7-041b-4116-86ff-b9d0c4349d49" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.235316 4854 patch_prober.go:28] interesting pod/apiserver-76f77b778f-pzlj8 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.235392 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-pzlj8" podUID="2259a421-a8dd-45e8-baa8-15cf1d37782e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.288040 4854 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.288112 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.426324 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-bzjqc" podUID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.446557 4854 trace.go:236] Trace[1069790503]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (03-Jan-2026 07:06:14.560) (total time: 8885ms): Jan 03 07:06:23 crc kubenswrapper[4854]: Trace[1069790503]: [8.885535362s] [8.885535362s] END Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.446562 4854 trace.go:236] Trace[1354677668]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-k8nxq" (03-Jan-2026 07:06:14.724) (total time: 8721ms): Jan 03 07:06:23 crc kubenswrapper[4854]: Trace[1354677668]: [8.721399077s] [8.721399077s] END Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.472578 4854 trace.go:236] Trace[1277829901]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (03-Jan-2026 07:06:15.855) (total time: 7617ms): Jan 03 07:06:23 crc kubenswrapper[4854]: Trace[1277829901]: [7.617295837s] [7.617295837s] END Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.530390 4854 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.530454 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.530795 4854 patch_prober.go:28] interesting pod/metrics-server-665fcf668f-65wrt container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.530831 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" podUID="5899ebcd-eec0-44ae-9e07-98b443d209c1" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.80:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.539711 4854 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-p67sv container/loki-distributor namespace/openshift-logging: Readiness 
probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.539773 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" podUID="128d93c6-02aa-4f68-aac6-cfcab1896a35" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.571385 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.571434 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.853488 4854 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-b8thp container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:23 crc kubenswrapper[4854]: I0103 07:06:23.853801 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" podUID="66f9492b-16b5-4b86-bb22-560ad0f8001c" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:23 crc kubenswrapper[4854]: E0103 07:06:23.989348 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:23 crc kubenswrapper[4854]: E0103 07:06:23.990525 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:23 crc kubenswrapper[4854]: E0103 07:06:23.991919 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:23 crc kubenswrapper[4854]: E0103 07:06:23.991946 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.000302 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.000390 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.000540 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.000996 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001064 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001064 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001111 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001146 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001171 4854 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-42f7g container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001196 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" 
podUID="b98c17f7-1569-4c33-ab65-f4c2ba0555ae" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001287 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001856 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"1fef4fe0b5cd3735e92b2987769721a91e3baf81c3b158e62352607a1dd17e36"} pod="openshift-operators/observability-operator-59bdc8b94-9trnq" containerMessage="Container operator failed liveness probe, will be restarted" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.001904 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" containerID="cri-o://1fef4fe0b5cd3735e92b2987769721a91e3baf81c3b158e62352607a1dd17e36" gracePeriod=30 Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.085326 4854 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-tgcxk container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.085358 4854 patch_prober.go:28] interesting pod/monitoring-plugin-57f57bb94b-jb8qx container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.085669 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podUID="ff43f741-1a42-4dfa-bfea-11b28b56487c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.085714 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" podUID="a6f05342-5fbe-4b7a-b222-e52b87c7e754" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.085773 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.085824 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.132434 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:24 crc 
kubenswrapper[4854]: I0103 07:06:24.132712 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.133054 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9lzwf" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.133107 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-9lzwf" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.132342 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.134673 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1"} pod="openshift-marketplace/redhat-operators-9lzwf" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.134982 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" containerID="cri-o://a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1" gracePeriod=30 Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.137184 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.137187 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.209886 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.209931 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.209956 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.209999 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.338254 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.338644 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-2kqhz" podUID="b1c0c51a-7edb-49cb-9b71-f7ce149bde33" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.339043 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.339103 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.515283 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.515659 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.515294 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.515896 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.516827 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" 
containerStatusID={"Type":"cri-o","ID":"423337c31d04ab35f34cc1bfe20f120baa0b2e3d55c33fe2710212e8b1497b88"} pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" containerMessage="Container operator failed liveness probe, will be restarted" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.516880 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" containerID="cri-o://423337c31d04ab35f34cc1bfe20f120baa0b2e3d55c33fe2710212e8b1497b88" gracePeriod=10 Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.584139 4854 generic.go:334] "Generic (PLEG): container finished" podID="c82f4933-ef34-46ae-8f48-f87b3ce1e90f" containerID="920c20a2aca36567c8d53a27e449dedf658aa6fc46392a08b3ed436f3b4ece63" exitCode=137 Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.584193 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" event={"ID":"c82f4933-ef34-46ae-8f48-f87b3ce1e90f","Type":"ContainerDied","Data":"920c20a2aca36567c8d53a27e449dedf658aa6fc46392a08b3ed436f3b4ece63"} Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.763321 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podUID="40ad961e-d740-49fa-9a1f-e9d950002a3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.763368 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podUID="c2f6c336-91f0-41e6-b439-c5d940264b7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.763487 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 07:06:24 crc kubenswrapper[4854]: E0103 07:06:24.785176 4854 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846422 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podUID="81de0b3b-e6fc-45c9-b347-995726d00213" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846485 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podUID="81de0b3b-e6fc-45c9-b347-995726d00213" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846582 4854 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podUID="40ad961e-d740-49fa-9a1f-e9d950002a3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846591 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846654 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846654 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846717 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846739 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846765 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.846935 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.847887 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podUID="c2f6c336-91f0-41e6-b439-c5d940264b7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.848002 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.849334 4854 patch_prober.go:28] interesting pod/logging-loki-ingester-0 
container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.849404 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.849491 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.849612 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.849693 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.850794 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"8564758a866053787b8c7c3719c9e2c0aafc3cfb635325e6e19ddeef1b7ed0e6"} pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" containerMessage="Container manager failed liveness probe, will be restarted" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.850845 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" containerID="cri-o://8564758a866053787b8c7c3719c9e2c0aafc3cfb635325e6e19ddeef1b7ed0e6" gracePeriod=10 Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.908347 4854 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-vd2jp container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.908468 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vd2jp" podUID="e05124c8-4705-4d57-82ec-b1ae0658e98e" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.929436 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.929483 4854 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.929435 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:24 crc kubenswrapper[4854]: I0103 07:06:24.929646 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.011334 4854 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.011400 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="7fb7ba42-5d69-44aa-87b2-28130157852b" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.011891 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.011983 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.194246 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.384433 4854 patch_prober.go:28] interesting pod/thanos-querier-5b7f7948f-gfss8 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.384504 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b7f7948f-gfss8" podUID="0ee90900-26e8-4d06-b2b4-f646a1570746" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.79:9091/-/ready\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399369 4854 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-42f7g container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399432 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" podUID="b98c17f7-1569-4c33-ab65-f4c2ba0555ae" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399457 4854 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399488 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" podUID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399507 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="c61cab0d-5846-418e-94ca-35e8a6c31ca0" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399523 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399551 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399552 4854 patch_prober.go:28] interesting pod/monitoring-plugin-57f57bb94b-jb8qx container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.81:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399611 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" podUID="a6f05342-5fbe-4b7a-b222-e52b87c7e754" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.81:9443/health\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399657 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.399760 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.400602 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"02a72f3526af7de403502873243d9df5f78fcecb9259c58c32cf0517bd4002fe"} pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" containerMessage="Container manager failed liveness probe, will be restarted" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.400649 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" containerID="cri-o://02a72f3526af7de403502873243d9df5f78fcecb9259c58c32cf0517bd4002fe" gracePeriod=10 Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.522373 4854 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.522429 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.522470 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" podUID="04d8c7f1-6674-45b0-9506-9d62c1a2f892" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.522471 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" podUID="04d8c7f1-6674-45b0-9506-9d62c1a2f892" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.522573 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.597752 4854 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_6047aa72-faf9-4f4d-95ab-df8b1230cedf/ovn-northd/0.log" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.598138 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6047aa72-faf9-4f4d-95ab-df8b1230cedf","Type":"ContainerStarted","Data":"b758c8bdda60889268554782023392360493f705ab67eb67336bb789643f97e9"} Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.598510 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.604320 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podUID="14991c3c-8c35-4008-b1a0-1b8690074322" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.604418 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.606849 4854 generic.go:334] "Generic (PLEG): container finished" podID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerID="820d48d16dc8ad8bfb1070482e2c87343667e66671ad0c016f3473c4af9b4abf" exitCode=0 Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.606912 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerDied","Data":"820d48d16dc8ad8bfb1070482e2c87343667e66671ad0c016f3473c4af9b4abf"} Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.608739 4854 generic.go:334] "Generic (PLEG): container finished" podID="ea9863f6-8706-4844-ad3e-93309cdbef22" containerID="c5a1cc71bf27b754936bdde8e575bd7aa1f0da15a1c6e03b51b95f04ffc0c08b" exitCode=0 Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.608844 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" event={"ID":"ea9863f6-8706-4844-ad3e-93309cdbef22","Type":"ContainerDied","Data":"c5a1cc71bf27b754936bdde8e575bd7aa1f0da15a1c6e03b51b95f04ffc0c08b"} Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.612287 4854 generic.go:334] "Generic (PLEG): container finished" podID="d1422b70-f6c6-46f8-81b3-1d2f35800374" containerID="179fce328f6e20fdc3653c24e9f94fa6adc4fd59f7d79ee8575929659e342509" exitCode=137 Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.612349 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-bzjqc" event={"ID":"d1422b70-f6c6-46f8-81b3-1d2f35800374","Type":"ContainerDied","Data":"179fce328f6e20fdc3653c24e9f94fa6adc4fd59f7d79ee8575929659e342509"} Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.619404 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"6276d73e5970a2f94289dded2de6b14873d5cb520a1efb79f3fa5ee5db4cac7c"} pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" containerMessage="Container manager failed liveness probe, will be restarted" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.619679 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" 
podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" containerID="cri-o://6276d73e5970a2f94289dded2de6b14873d5cb520a1efb79f3fa5ee5db4cac7c" gracePeriod=10 Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.851948 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podUID="14991c3c-8c35-4008-b1a0-1b8690074322" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.933633 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podUID="25988b2b-1924-4007-a6b1-5e5403d5dc68" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.933803 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 07:06:25 crc kubenswrapper[4854]: I0103 07:06:25.934356 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podUID="25988b2b-1924-4007-a6b1-5e5403d5dc68" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.015302 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.015376 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.097378 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.097431 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.097519 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" podUID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.097542 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.097574 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.097878 4854 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-tgcxk container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.097933 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" podUID="ff43f741-1a42-4dfa-bfea-11b28b56487c" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.33:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.098937 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"782199bdda514fa01ba01343a904400694b5d17b3870189dbd47ddbd380a3384"} pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" containerMessage="Container manager failed liveness probe, will be restarted" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.098991 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" containerID="cri-o://782199bdda514fa01ba01343a904400694b5d17b3870189dbd47ddbd380a3384" gracePeriod=10 Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.128700 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.128733 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-wwws2" podUID="6cc37176-dd9d-4138-a8f4-615d7815311a" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.129526 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8f5jd" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.131240 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.131429 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-6pvh8" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.132406 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.132481 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8f5jd" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.132765 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca"} pod="openshift-marketplace/certified-operators-6pvh8" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.132795 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" containerID="cri-o://bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca" gracePeriod=30 Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.134205 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.134328 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pvh8" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.181390 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.181473 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.181757 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.181503 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.181519 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 
07:06:26.181924 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.181974 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.222379 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.222630 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.222764 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.239237 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"beefc6c9c7cd45bbcbb076ad45c39976e498a1e4adab4caf810e7c2791e0ac1e"} pod="openshift-marketplace/redhat-marketplace-8f5jd" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.239334 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" containerID="cri-o://beefc6c9c7cd45bbcbb076ad45c39976e498a1e4adab4caf810e7c2791e0ac1e" gracePeriod=30 Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.304348 4854 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.304389 4854 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-mxm65 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.304416 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.304458 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" 
podUID="d78e7aa0-58e7-4445-920b-ca73758f9c84" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.304485 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.304403 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="78ad3d84-530d-45e9-928d-c552448aec20" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.304330 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" podUID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.304356 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346310 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346376 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346439 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" podUID="ddf8e54e-858e-432c-ab2d-8b4d83f6282b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346609 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346692 4854 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podUID="8f21d9f8-0bdd-43de-8196-186dccb7b2f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346731 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" podUID="ddf8e54e-858e-432c-ab2d-8b4d83f6282b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346714 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" podUID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346838 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.346947 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.347035 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.347124 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podUID="8f21d9f8-0bdd-43de-8196-186dccb7b2f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.347165 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429299 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="64c47821-9bcb-435f-9802-15d45eb73f52" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429350 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= 
Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429378 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429385 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" podUID="c2f6c336-91f0-41e6-b439-c5d940264b7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429391 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" podUID="81de0b3b-e6fc-45c9-b347-995726d00213" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429332 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429377 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="64c47821-9bcb-435f-9802-15d45eb73f52" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429299 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" podUID="40ad961e-d740-49fa-9a1f-e9d950002a3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429716 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429774 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.429887 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.430426 4854 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.448074 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.626813 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerStarted","Data":"3dd84dedf8688d5ff3d1a5fbdcb29d9e3cd0b508fad6ac6b71c603d2fc526568"} Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.628476 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"552f47762794e0c39bb4081c8db206bd34205a674c1b762980d168b1617b9e91"} pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" containerMessage="Container manager failed liveness probe, will be restarted" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.628551 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" containerID="cri-o://552f47762794e0c39bb4081c8db206bd34205a674c1b762980d168b1617b9e91" gracePeriod=10 Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.646239 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" podUID="14991c3c-8c35-4008-b1a0-1b8690074322" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.747593 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.747669 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.747789 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.751183 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus" containerStatusID={"Type":"cri-o","ID":"e2af7467dae280858c2f304d5e6bd72712fff83f213ca117622d0be2839f6d64"} pod="openstack/prometheus-metric-storage-0" containerMessage="Container prometheus failed liveness probe, will be restarted" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.751289 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" 
containerID="cri-o://e2af7467dae280858c2f304d5e6bd72712fff83f213ca117622d0be2839f6d64" gracePeriod=600 Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.931713 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.931777 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 03 07:06:26 crc kubenswrapper[4854]: I0103 07:06:26.975283 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" podUID="25988b2b-1924-4007-a6b1-5e5403d5dc68" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.112289 4854 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.128416 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.128500 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.130771 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.130909 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.130817 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output="command timed out" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.131491 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output="command timed out" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.131594 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-7vksk" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.132245 4854 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="prometheus" containerStatusID={"Type":"cri-o","ID":"126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d"} pod="openshift-monitoring/prometheus-k8s-0" containerMessage="Container prometheus failed liveness probe, will be restarted" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.132461 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" containerID="cri-o://126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d" gracePeriod=600 Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.267244 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.267244 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" podUID="402a077e-f741-447d-ab1c-25bc62cd24cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.470268 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" podUID="6515eec5-5595-42cb-8588-81baa0db47c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.470303 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" podUID="8f21d9f8-0bdd-43de-8196-186dccb7b2f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.470343 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" podUID="ddf8e54e-858e-432c-ab2d-8b4d83f6282b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.511288 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.645527 4854 generic.go:334] "Generic (PLEG): container finished" podID="0c7ed8af-66a8-4ce9-95bd-4818cc646245" containerID="a957a4826fca42b35b6e9ebf213d5830ccdff686e48afdeea502621addf72ba0" exitCode=1 Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.645865 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" event={"ID":"0c7ed8af-66a8-4ce9-95bd-4818cc646245","Type":"ContainerDied","Data":"a957a4826fca42b35b6e9ebf213d5830ccdff686e48afdeea502621addf72ba0"} Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.650629 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-2lwzj_dcde1a7d-7025-45cb-92de-483da7a86296/console-operator/0.log" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.650682 4854 generic.go:334] "Generic (PLEG): container finished" podID="dcde1a7d-7025-45cb-92de-483da7a86296" containerID="e471bf3617e0cc1e81dc107d06d8c5e3583056df720e8fbf7f91a51f43b4521b" exitCode=1 Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.651617 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757"} pod="openstack-operators/openstack-operator-index-7vksk" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.651662 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" containerID="cri-o://bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757" gracePeriod=30 Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.652119 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" event={"ID":"dcde1a7d-7025-45cb-92de-483da7a86296","Type":"ContainerDied","Data":"e471bf3617e0cc1e81dc107d06d8c5e3583056df720e8fbf7f91a51f43b4521b"} Jan 03 07:06:27 crc kubenswrapper[4854]: I0103 07:06:27.653173 4854 scope.go:117] "RemoveContainer" containerID="a957a4826fca42b35b6e9ebf213d5830ccdff686e48afdeea502621addf72ba0" Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.310266 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.462209 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output="" Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.687027 4854 generic.go:334] "Generic (PLEG): container finished" podID="c2f6c336-91f0-41e6-b439-c5d940264b7f" containerID="404fb52f064b6dcc85411ed7ef7cc4d30a115e2fd32313478d77f5c379b1bdd0" exitCode=1 Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.687123 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" event={"ID":"c2f6c336-91f0-41e6-b439-c5d940264b7f","Type":"ContainerDied","Data":"404fb52f064b6dcc85411ed7ef7cc4d30a115e2fd32313478d77f5c379b1bdd0"} Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.688310 4854 scope.go:117] "RemoveContainer" containerID="404fb52f064b6dcc85411ed7ef7cc4d30a115e2fd32313478d77f5c379b1bdd0" Jan 03 07:06:28 crc kubenswrapper[4854]: 
I0103 07:06:28.689132 4854 generic.go:334] "Generic (PLEG): container finished" podID="e62c43c5-cac2-4f9f-9e1b-de61827c4c94" containerID="681a58a1260cf4c81cb41d9d5ff37952fd310164763547c25d23c6d3b57e2a94" exitCode=1 Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.689186 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" event={"ID":"e62c43c5-cac2-4f9f-9e1b-de61827c4c94","Type":"ContainerDied","Data":"681a58a1260cf4c81cb41d9d5ff37952fd310164763547c25d23c6d3b57e2a94"} Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.690469 4854 scope.go:117] "RemoveContainer" containerID="681a58a1260cf4c81cb41d9d5ff37952fd310164763547c25d23c6d3b57e2a94" Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.980844 4854 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-k8nxq container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.58:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.981221 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.58:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.981429 4854 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-k8nxq container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.58:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:28 crc kubenswrapper[4854]: I0103 07:06:28.981522 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-k8nxq" podUID="b633dc70-c725-4f1b-9595-aee7f6c165b4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.58:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.103591 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.103650 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.134100 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.134201 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.137378 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" probeResult="failure" 
output="command timed out" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.137475 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.210168 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.210242 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-w49nx container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.210491 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.210561 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-w49nx" podUID="4de190f3-1f91-4bd7-9d46-df7235633d58" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:29 crc kubenswrapper[4854]: E0103 07:06:29.315274 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1 is running failed: container process not found" containerID="a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:29 crc kubenswrapper[4854]: E0103 07:06:29.315719 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1 is running failed: container process not found" containerID="a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:29 crc kubenswrapper[4854]: E0103 07:06:29.316094 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1 is running failed: container process not found" containerID="a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:29 crc kubenswrapper[4854]: E0103 07:06:29.316133 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.383499 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" podUID="c752fc50-5b45-4cbc-8a1c-b0cec9e720e5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": dial tcp 10.217.0.93:8080: connect: connection refused" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.595579 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": EOF" start-of-body= Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.595638 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": EOF" start-of-body= Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.595991 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": EOF" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.595939 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": EOF" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.702423 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" event={"ID":"c82f4933-ef34-46ae-8f48-f87b3ce1e90f","Type":"ContainerStarted","Data":"c05eb79051efd6b202438463f2d2306873d9b26ee8be1f8e32b9bcfc8dc274fd"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.702590 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.703671 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.703716 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.703981 4854 patch_prober.go:28] interesting pod/logging-loki-gateway-656bf7cf7c-98v92 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.704022 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-656bf7cf7c-98v92" podUID="428c2117-0003-47b2-abfa-f4f7930e126c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.707770 4854 generic.go:334] "Generic (PLEG): container finished" podID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerID="bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca" exitCode=0 Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.707819 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pvh8" event={"ID":"03a2de93-c858-46e8-ae42-a34d1d776b7c","Type":"ContainerDied","Data":"bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.712908 4854 generic.go:334] "Generic (PLEG): container finished" podID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" exitCode=0 Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.712983 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6gct" event={"ID":"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2","Type":"ContainerDied","Data":"dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.717350 4854 generic.go:334] "Generic (PLEG): container finished" podID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerID="a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1" exitCode=0 Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.717482 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lzwf" event={"ID":"7e9a4f28-3133-4df6-9ed3-fbae3e03d777","Type":"ContainerDied","Data":"a0936da6f25419621b27d0418a43e754bb7970fc15020eb69cec2bc27be795a1"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.719632 4854 generic.go:334] "Generic (PLEG): container finished" podID="c752fc50-5b45-4cbc-8a1c-b0cec9e720e5" containerID="fe20e50857042ab7dcaddfbd3d7f074d09030b7ded9acadf155df462aaccbfcd" exitCode=1 Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.719691 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" event={"ID":"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5","Type":"ContainerDied","Data":"fe20e50857042ab7dcaddfbd3d7f074d09030b7ded9acadf155df462aaccbfcd"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.720984 4854 scope.go:117] "RemoveContainer" containerID="fe20e50857042ab7dcaddfbd3d7f074d09030b7ded9acadf155df462aaccbfcd" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.722134 4854 generic.go:334] "Generic (PLEG): container finished" podID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerID="1c1339677d0c8a6d7d7eee61fd4fa15d6a40580599301989032bde78a8b8e7c2" exitCode=0 Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.722188 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" 
event={"ID":"159783f1-b3b7-432d-b243-e8e7076ddd0a","Type":"ContainerDied","Data":"1c1339677d0c8a6d7d7eee61fd4fa15d6a40580599301989032bde78a8b8e7c2"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.724338 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" event={"ID":"e1f91a20-c61d-488f-98ab-f966174f3764","Type":"ContainerStarted","Data":"6dd3ec3405b378ee7065fdf75aa7ab0db9b901a1dc87a31841de861fd58e52cc"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.724555 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.725245 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.725296 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.727141 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"006530e4-7385-4334-80e8-86bfcf5f645f","Type":"ContainerStarted","Data":"8cafcd9d39769968b81b8e6d8052b84e5bd2df3bab97ff9a02c9ef238410c1b0"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.727261 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.728969 4854 generic.go:334] "Generic (PLEG): container finished" podID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerID="1fef4fe0b5cd3735e92b2987769721a91e3baf81c3b158e62352607a1dd17e36" exitCode=0 Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.729043 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" event={"ID":"5c8ccde8-0051-491f-b5d6-a2930440c138","Type":"ContainerDied","Data":"1fef4fe0b5cd3735e92b2987769721a91e3baf81c3b158e62352607a1dd17e36"} Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.729677 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="heat-engine" containerStatusID={"Type":"cri-o","ID":"5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae"} pod="openstack/heat-engine-64d84f65b5-cnzjg" containerMessage="Container heat-engine failed liveness probe, will be restarted" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.729729 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" containerID="cri-o://5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" gracePeriod=60 Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.751021 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" 
containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.932001 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.932068 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.955099 4854 trace.go:236] Trace[1103497476]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (03-Jan-2026 07:06:18.326) (total time: 11628ms): Jan 03 07:06:29 crc kubenswrapper[4854]: Trace[1103497476]: [11.628952366s] [11.628952366s] END Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.959768 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.966159 4854 trace.go:236] Trace[528919130]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (03-Jan-2026 07:06:18.706) (total time: 11259ms): Jan 03 07:06:29 crc kubenswrapper[4854]: Trace[528919130]: [11.259403674s] [11.259403674s] END Jan 03 07:06:29 crc kubenswrapper[4854]: I0103 07:06:29.966235 4854 trace.go:236] Trace[1507704802]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (03-Jan-2026 07:06:23.302) (total time: 6663ms): Jan 03 07:06:29 crc kubenswrapper[4854]: Trace[1507704802]: [6.663414388s] [6.663414388s] END Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.022802 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.023043 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.051282 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" podUID="1f9928f3-0c28-40df-b6ad-c871424ad3a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": read tcp 10.217.0.2:47736->10.217.0.116:8081: read: connection reset by peer" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.325342 4854 patch_prober.go:28] interesting 
pod/package-server-manager-789f6589d5-99h4j container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.325362 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.325389 4854 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lltxw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.325398 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" podUID="07007d77-4861-45ac-aacd-17b840bef2ee" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.325404 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.325446 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw" podUID="77084a3a-5610-4014-a3bf-6d4073a74d44" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.541251 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" podUID="ba0f32da-a0e3-4c43-8dde-d6212a1c63e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.648323 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 03 07:06:30 crc kubenswrapper[4854]: [+]has-synced ok Jan 03 07:06:30 crc kubenswrapper[4854]: [-]process-running failed: reason withheld Jan 03 07:06:30 crc kubenswrapper[4854]: healthz check failed Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.648376 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.754883 4854 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output=< Jan 03 07:06:30 crc kubenswrapper[4854]: % Total % Received % Xferd Average Speed Time Time Time Current Jan 03 07:06:30 crc kubenswrapper[4854]: Dload Upload Total Spent Left Speed Jan 03 07:06:30 crc kubenswrapper[4854]: [166B blob data] Jan 03 07:06:30 crc kubenswrapper[4854]: curl: (22) The requested URL returned error: 503 Jan 03 07:06:30 crc kubenswrapper[4854]: > Jan 03 07:06:30 crc kubenswrapper[4854]: E0103 07:06:30.766295 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.767815 4854 generic.go:334] "Generic (PLEG): container finished" podID="7d4776d0-290f-4c82-aa5c-6412b5bb4608" containerID="7a94f73858bd6bc637fdf88e96a68dc87a1aefec805ad7273825018854334617" exitCode=1 Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.768453 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" event={"ID":"7d4776d0-290f-4c82-aa5c-6412b5bb4608","Type":"ContainerDied","Data":"7a94f73858bd6bc637fdf88e96a68dc87a1aefec805ad7273825018854334617"} Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.769815 4854 scope.go:117] "RemoveContainer" containerID="7a94f73858bd6bc637fdf88e96a68dc87a1aefec805ad7273825018854334617" Jan 03 07:06:30 crc kubenswrapper[4854]: E0103 07:06:30.776716 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.779068 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" event={"ID":"ea9863f6-8706-4844-ad3e-93309cdbef22","Type":"ContainerStarted","Data":"8dc4cbbc8c13dbcb595c7b3d6dfcdc4cded88f1509714f81ee5e950b7400532f"} Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.779418 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4" Jan 03 07:06:30 crc kubenswrapper[4854]: E0103 07:06:30.781394 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Jan 03 07:06:30 crc 
kubenswrapper[4854]: E0103 07:06:30.781455 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.788247 4854 generic.go:334] "Generic (PLEG): container finished" podID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerID="552f47762794e0c39bb4081c8db206bd34205a674c1b762980d168b1617b9e91" exitCode=0 Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.788347 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" event={"ID":"05f5522f-8e47-4d35-be75-2edee0f16f77","Type":"ContainerDied","Data":"552f47762794e0c39bb4081c8db206bd34205a674c1b762980d168b1617b9e91"} Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.790784 4854 generic.go:334] "Generic (PLEG): container finished" podID="25988b2b-1924-4007-a6b1-5e5403d5dc68" containerID="5a78d09ec546ee13bf3ea690431552445ce59db383b12a2c96a94ff28061ccd8" exitCode=1 Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.790858 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" event={"ID":"25988b2b-1924-4007-a6b1-5e5403d5dc68","Type":"ContainerDied","Data":"5a78d09ec546ee13bf3ea690431552445ce59db383b12a2c96a94ff28061ccd8"} Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.791971 4854 scope.go:117] "RemoveContainer" containerID="5a78d09ec546ee13bf3ea690431552445ce59db383b12a2c96a94ff28061ccd8" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.798837 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 03 07:06:30 crc kubenswrapper[4854]: E0103 07:06:30.800511 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca is running failed: container process not found" containerID="bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:30 crc kubenswrapper[4854]: E0103 07:06:30.801248 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca is running failed: container process not found" containerID="bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:30 crc kubenswrapper[4854]: E0103 07:06:30.802108 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca is running failed: container process not found" containerID="bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:30 crc kubenswrapper[4854]: E0103 07:06:30.802156 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
bbe82c0b1f68f3994e20a0b03a43efbc04ceb2ca503b4bedb9b84659f17e79ca is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.806047 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.808691 4854 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5a0763e01c1342b89c8825637cbcf287d92d7340beb51666b69cf6ebf12fd3b9" exitCode=1 Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.808741 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5a0763e01c1342b89c8825637cbcf287d92d7340beb51666b69cf6ebf12fd3b9"} Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.808789 4854 scope.go:117] "RemoveContainer" containerID="1e783c688f3958d8106cc55b7b342eb3b92c06ef49c155bc3474c7118bccdd71" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.810446 4854 scope.go:117] "RemoveContainer" containerID="5a0763e01c1342b89c8825637cbcf287d92d7340beb51666b69cf6ebf12fd3b9" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.812905 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-bzjqc" event={"ID":"d1422b70-f6c6-46f8-81b3-1d2f35800374","Type":"ContainerStarted","Data":"eec9670e0e7ddcbbaf40d53c87e9c25c166df03692c5930bb16183d120657b00"} Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.813020 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-bzjqc" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.816638 4854 generic.go:334] "Generic (PLEG): container finished" podID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerID="6276d73e5970a2f94289dded2de6b14873d5cb520a1efb79f3fa5ee5db4cac7c" exitCode=0 Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.816665 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" event={"ID":"a327e8cf-824f-41b1-9076-5fd57a8b4352","Type":"ContainerDied","Data":"6276d73e5970a2f94289dded2de6b14873d5cb520a1efb79f3fa5ee5db4cac7c"} Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.818560 4854 generic.go:334] "Generic (PLEG): container finished" podID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerID="423337c31d04ab35f34cc1bfe20f120baa0b2e3d55c33fe2710212e8b1497b88" exitCode=0 Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.818652 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" event={"ID":"bc9994eb-5930-484d-a02c-60d4e13483e2","Type":"ContainerDied","Data":"423337c31d04ab35f34cc1bfe20f120baa0b2e3d55c33fe2710212e8b1497b88"} Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.819493 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" 
start-of-body= Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.819547 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.908907 4854 trace.go:236] Trace[1483581112]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-1" (03-Jan-2026 07:06:27.315) (total time: 3592ms): Jan 03 07:06:30 crc kubenswrapper[4854]: Trace[1483581112]: [3.592892782s] [3.592892782s] END Jan 03 07:06:30 crc kubenswrapper[4854]: I0103 07:06:30.947401 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.032609 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.033026 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.032815 4854 patch_prober.go:28] interesting pod/downloads-7954f5f757-dmlm5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.033370 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dmlm5" podUID="d5805efa-800c-43df-ba80-7a7db226ebb3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.072449 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-6fczv" podUID="e29c84ac-4ca9-44ec-b886-ae50c84ba121" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.127264 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6fczv" Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.203412 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="beefc6c9c7cd45bbcbb076ad45c39976e498a1e4adab4caf810e7c2791e0ac1e" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.208307 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="beefc6c9c7cd45bbcbb076ad45c39976e498a1e4adab4caf810e7c2791e0ac1e" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.239037 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="beefc6c9c7cd45bbcbb076ad45c39976e498a1e4adab4caf810e7c2791e0ac1e" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.239154 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.261000 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.263137 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.265395 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.266686 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.266760 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.341446 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532299 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lcbzf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532333 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532312 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532356 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lcbzf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532381 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532413 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532355 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532416 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532446 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532490 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532502 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.532510 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.533837 4854 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"ccd9c5a4f61c165f96a2b42680ccd773657a0bfcb8c8599cfbbde2f069b6a6c0"} pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.533876 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" containerID="cri-o://ccd9c5a4f61c165f96a2b42680ccd773657a0bfcb8c8599cfbbde2f069b6a6c0" gracePeriod=30 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.534338 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"c021757414cec9b59fdae5efd408a03412abeb6822003d9b893eb05bdeb3f029"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.534369 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" containerID="cri-o://c021757414cec9b59fdae5efd408a03412abeb6822003d9b893eb05bdeb3f029" gracePeriod=30 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.547143 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.848516 4854 generic.go:334] "Generic (PLEG): container finished" podID="ddf8e54e-858e-432c-ab2d-8b4d83f6282b" containerID="9ad77647bc8c7a303f8c28d2bec0f7ab94d0e6d882ca027fcc923850bed2e1e6" exitCode=1 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.848601 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" event={"ID":"ddf8e54e-858e-432c-ab2d-8b4d83f6282b","Type":"ContainerDied","Data":"9ad77647bc8c7a303f8c28d2bec0f7ab94d0e6d882ca027fcc923850bed2e1e6"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.849430 4854 scope.go:117] "RemoveContainer" containerID="9ad77647bc8c7a303f8c28d2bec0f7ab94d0e6d882ca027fcc923850bed2e1e6" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.857999 4854 generic.go:334] "Generic (PLEG): container finished" podID="07007d77-4861-45ac-aacd-17b840bef2ee" containerID="cd5d4db41f7b67e9b596b2078363907a0118e5e595d471db2365026bf43e6851" exitCode=0 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.858069 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" event={"ID":"07007d77-4861-45ac-aacd-17b840bef2ee","Type":"ContainerDied","Data":"cd5d4db41f7b67e9b596b2078363907a0118e5e595d471db2365026bf43e6851"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.863644 4854 generic.go:334] "Generic (PLEG): container finished" podID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerID="782199bdda514fa01ba01343a904400694b5d17b3870189dbd47ddbd380a3384" exitCode=0 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.863692 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" event={"ID":"ad6a18d3-e1d2-446a-9b41-a9fca5e8b574","Type":"ContainerDied","Data":"782199bdda514fa01ba01343a904400694b5d17b3870189dbd47ddbd380a3384"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.895664 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6fczv" event={"ID":"e29c84ac-4ca9-44ec-b886-ae50c84ba121","Type":"ContainerStarted","Data":"591461506d4c7b2fb7114f34397471ba90face1367421551311ccc054357c998"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.895969 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6fczv" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.900608 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-tdlx9_ab6ec22e-2a2c-4e28-8242-5bd783990843/router/0.log" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.900695 4854 generic.go:334] "Generic (PLEG): container finished" podID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerID="63cc6c355f6397dba553d8cb89d15fb9ff68767748c5f862c6a7a5d7d0806e07" exitCode=137 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.900789 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tdlx9" event={"ID":"ab6ec22e-2a2c-4e28-8242-5bd783990843","Type":"ContainerDied","Data":"63cc6c355f6397dba553d8cb89d15fb9ff68767748c5f862c6a7a5d7d0806e07"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.906642 4854 generic.go:334] "Generic (PLEG): container finished" podID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerID="e2af7467dae280858c2f304d5e6bd72712fff83f213ca117622d0be2839f6d64" exitCode=0 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.906972 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"79760a75-c798-415d-be02-dd3a6a9c74ee","Type":"ContainerDied","Data":"e2af7467dae280858c2f304d5e6bd72712fff83f213ca117622d0be2839f6d64"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.912053 4854 generic.go:334] "Generic (PLEG): container finished" podID="1f9928f3-0c28-40df-b6ad-c871424ad3a6" containerID="7a17508ff67dfec2d67f7f271686ca65908119c01fe986e63df083ab37deb07e" exitCode=1 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.912148 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" event={"ID":"1f9928f3-0c28-40df-b6ad-c871424ad3a6","Type":"ContainerDied","Data":"7a17508ff67dfec2d67f7f271686ca65908119c01fe986e63df083ab37deb07e"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.913188 4854 scope.go:117] "RemoveContainer" containerID="7a17508ff67dfec2d67f7f271686ca65908119c01fe986e63df083ab37deb07e" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.916217 4854 generic.go:334] "Generic (PLEG): container finished" podID="14991c3c-8c35-4008-b1a0-1b8690074322" containerID="95bce15c2a178ac1cba0a18dc148e3ffba44a4cc32babeb5c7258243a1c05990" exitCode=1 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.916272 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" event={"ID":"14991c3c-8c35-4008-b1a0-1b8690074322","Type":"ContainerDied","Data":"95bce15c2a178ac1cba0a18dc148e3ffba44a4cc32babeb5c7258243a1c05990"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.916682 4854 scope.go:117] 
"RemoveContainer" containerID="95bce15c2a178ac1cba0a18dc148e3ffba44a4cc32babeb5c7258243a1c05990" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.919376 4854 generic.go:334] "Generic (PLEG): container finished" podID="7f7c87f2-5743-4000-a36a-3a9400e24cdd" containerID="b29a56eea2c1e72f35b4e50d6c1fd1f33dff3c436704cd4fc9bcdf70e3c082ee" exitCode=1 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.919420 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" event={"ID":"7f7c87f2-5743-4000-a36a-3a9400e24cdd","Type":"ContainerDied","Data":"b29a56eea2c1e72f35b4e50d6c1fd1f33dff3c436704cd4fc9bcdf70e3c082ee"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.920291 4854 scope.go:117] "RemoveContainer" containerID="b29a56eea2c1e72f35b4e50d6c1fd1f33dff3c436704cd4fc9bcdf70e3c082ee" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.922681 4854 generic.go:334] "Generic (PLEG): container finished" podID="ba0f32da-a0e3-4c43-8dde-d6212a1c63e1" containerID="e5a91fcb715bdf60c1d37a559ebbb7addbd8d8b8f95e6c6e300d56858e664bd6" exitCode=1 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.922769 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" event={"ID":"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1","Type":"ContainerDied","Data":"e5a91fcb715bdf60c1d37a559ebbb7addbd8d8b8f95e6c6e300d56858e664bd6"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.924231 4854 scope.go:117] "RemoveContainer" containerID="e5a91fcb715bdf60c1d37a559ebbb7addbd8d8b8f95e6c6e300d56858e664bd6" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.926460 4854 generic.go:334] "Generic (PLEG): container finished" podID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerID="02a72f3526af7de403502873243d9df5f78fcecb9259c58c32cf0517bd4002fe" exitCode=0 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.926522 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" event={"ID":"1d8399ce-3c90-4601-9a32-31dc20da4552","Type":"ContainerDied","Data":"02a72f3526af7de403502873243d9df5f78fcecb9259c58c32cf0517bd4002fe"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.930222 4854 generic.go:334] "Generic (PLEG): container finished" podID="fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1" containerID="27e03b059df2eaacdaf62fc993ec49a70301713595a9b2b921343637a5e6ac56" exitCode=1 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.930312 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" event={"ID":"fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1","Type":"ContainerDied","Data":"27e03b059df2eaacdaf62fc993ec49a70301713595a9b2b921343637a5e6ac56"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.931033 4854 scope.go:117] "RemoveContainer" containerID="27e03b059df2eaacdaf62fc993ec49a70301713595a9b2b921343637a5e6ac56" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.935938 4854 generic.go:334] "Generic (PLEG): container finished" podID="56476ba9-ae33-4d34-855c-0e144e4f5da3" containerID="7797cd0dbfc31494234b66b2ec1186c0b6f0cb586b6282d6a6e7c1bac6d18947" exitCode=1 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.936000 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" 
event={"ID":"56476ba9-ae33-4d34-855c-0e144e4f5da3","Type":"ContainerDied","Data":"7797cd0dbfc31494234b66b2ec1186c0b6f0cb586b6282d6a6e7c1bac6d18947"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.937455 4854 scope.go:117] "RemoveContainer" containerID="7797cd0dbfc31494234b66b2ec1186c0b6f0cb586b6282d6a6e7c1bac6d18947" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.953158 4854 generic.go:334] "Generic (PLEG): container finished" podID="b0379c6e-b02d-40ef-b9ae-add1e633bc4a" containerID="b003306c709cdb4b4c71e9bbbb037118b8466a77a516a31cf224abfbcfbcd931" exitCode=0 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.953271 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" event={"ID":"b0379c6e-b02d-40ef-b9ae-add1e633bc4a","Type":"ContainerDied","Data":"b003306c709cdb4b4c71e9bbbb037118b8466a77a516a31cf224abfbcfbcd931"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.968471 4854 generic.go:334] "Generic (PLEG): container finished" podID="81de0b3b-e6fc-45c9-b347-995726d00213" containerID="42580edc27d34903604c0511b72307f02c183363e82be201ab729031ea338806" exitCode=1 Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.968675 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server" probeResult="failure" output="" Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.968768 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" event={"ID":"81de0b3b-e6fc-45c9-b347-995726d00213","Type":"ContainerDied","Data":"42580edc27d34903604c0511b72307f02c183363e82be201ab729031ea338806"} Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.969632 4854 scope.go:117] "RemoveContainer" containerID="42580edc27d34903604c0511b72307f02c183363e82be201ab729031ea338806" Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.969715 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757 is running failed: container process not found" containerID="bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.970109 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757 is running failed: container process not found" containerID="bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.970670 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757 is running failed: container process not found" containerID="bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.970698 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server"
Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.971100 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757 is running failed: container process not found" containerID="bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757" cmd=["grpc_health_probe","-addr=:50051"]
Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.971829 4854 generic.go:334] "Generic (PLEG): container finished" podID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerID="8564758a866053787b8c7c3719c9e2c0aafc3cfb635325e6e19ddeef1b7ed0e6" exitCode=0
Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.972214 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757 is running failed: container process not found" containerID="bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757" cmd=["grpc_health_probe","-addr=:50051"]
Jan 03 07:06:31 crc kubenswrapper[4854]: I0103 07:06:31.972349 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" event={"ID":"f5b690cb-eb48-469c-a774-eff5eda46f89","Type":"ContainerDied","Data":"8564758a866053787b8c7c3719c9e2c0aafc3cfb635325e6e19ddeef1b7ed0e6"}
Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.972736 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757 is running failed: container process not found" containerID="bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757" cmd=["grpc_health_probe","-addr=:50051"]
Jan 03 07:06:31 crc kubenswrapper[4854]: E0103 07:06:31.972764 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-7vksk" podUID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" containerName="registry-server"
Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.128692 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" probeResult="failure" output="command timed out"
Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.152206 4854 kuberuntime_container.go:700] "PreStop hook not completed in grace period" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" containerID="cri-o://85534b86966512a3a6777d75350ffebd09d510b71ed3bd20f577bb5b92d31a74" gracePeriod=30
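
The repeated ExecSync failures above are retries of one registry-server readiness probe: the kubelet execs grpc_health_probe -addr=:50051 inside the container, but the container process has already exited, so CRI-O can only answer NotFound. When the container is alive, that binary performs a single standard gRPC health check; a sketch of the equivalent call, assuming grpc-go and a plaintext listener on :50051:

```go
// healthcheck.go — a sketch of the check grpc_health_probe performs,
// using the standard gRPC health-checking protocol (grpc.health.v1).
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect:", err)
		os.Exit(1)
	}
	defer conn.Close()

	// An empty Service field asks for overall server health, which matches
	// grpc_health_probe's default behavior.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		fmt.Fprintln(os.Stderr, "not serving:", err)
		os.Exit(1) // nonzero exit is what the kubelet counts as probe failure
	}
	fmt.Println("SERVING")
}
```

Because the probe runs as an exec inside the target container, it can only ever report anything if that container still has a live process; here it never gets that far, so the prober logs "Probe errored" rather than a plain failure.

Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.152560 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera"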
containerID="cri-o://85534b86966512a3a6777d75350ffebd09d510b71ed3bd20f577bb5b92d31a74" gracePeriod=2 Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.199412 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-czgkw"] Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.242827 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 07:06:32 crc kubenswrapper[4854]: E0103 07:06:32.902909 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:32 crc kubenswrapper[4854]: E0103 07:06:32.905544 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:32 crc kubenswrapper[4854]: E0103 07:06:32.907744 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:32 crc kubenswrapper[4854]: E0103 07:06:32.907906 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.915788 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": dial tcp 10.217.0.7:8081: connect: connection refused" start-of-body= Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.915842 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": dial tcp 10.217.0.7:8081: connect: connection refused" Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.931819 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.931876 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 03 07:06:32 crc 
kubenswrapper[4854]: I0103 07:06:32.989361 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-b8thp" Jan 03 07:06:32 crc kubenswrapper[4854]: I0103 07:06:32.997300 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-p67sv" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.019091 4854 generic.go:334] "Generic (PLEG): container finished" podID="07528198-b6c3-44c7-aec4-4647d7a06116" containerID="ccd9c5a4f61c165f96a2b42680ccd773657a0bfcb8c8599cfbbde2f069b6a6c0" exitCode=0 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.019215 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" event={"ID":"07528198-b6c3-44c7-aec4-4647d7a06116","Type":"ContainerDied","Data":"ccd9c5a4f61c165f96a2b42680ccd773657a0bfcb8c8599cfbbde2f069b6a6c0"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.028542 4854 generic.go:334] "Generic (PLEG): container finished" podID="402a077e-f741-447d-ab1c-25bc62cd24cf" containerID="451f22fd7dca98b9f9f7499fbbe0a62e1c5d3a9d28d0859afaee9fd992ccdad3" exitCode=1 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.028965 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" event={"ID":"402a077e-f741-447d-ab1c-25bc62cd24cf","Type":"ContainerDied","Data":"451f22fd7dca98b9f9f7499fbbe0a62e1c5d3a9d28d0859afaee9fd992ccdad3"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.030586 4854 scope.go:117] "RemoveContainer" containerID="451f22fd7dca98b9f9f7499fbbe0a62e1c5d3a9d28d0859afaee9fd992ccdad3" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.053993 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-42f7g" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.055444 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tgcxk" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.057716 4854 generic.go:334] "Generic (PLEG): container finished" podID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerID="c021757414cec9b59fdae5efd408a03412abeb6822003d9b893eb05bdeb3f029" exitCode=0 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.057828 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" event={"ID":"5ab7ee8b-9182-43e2-85de-f8d92aa12587","Type":"ContainerDied","Data":"c021757414cec9b59fdae5efd408a03412abeb6822003d9b893eb05bdeb3f029"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.066837 4854 generic.go:334] "Generic (PLEG): container finished" podID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerID="8ba167167a7457c4d989953c93e58c0a961916861f9a13e0bb90cacb5956b991" exitCode=0 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.066912 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" event={"ID":"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8","Type":"ContainerDied","Data":"8ba167167a7457c4d989953c93e58c0a961916861f9a13e0bb90cacb5956b991"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.076133 4854 generic.go:334] "Generic (PLEG): container finished" podID="ec8a24a9-62d4-4db8-8f17-f261a85d6a47" 
containerID="bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757" exitCode=0 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.076233 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7vksk" event={"ID":"ec8a24a9-62d4-4db8-8f17-f261a85d6a47","Type":"ContainerDied","Data":"bf78afb6756a0ac3f08ae94f7f5549ad50f06b336a95f7cfdbbb58d836a8e757"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.095203 4854 generic.go:334] "Generic (PLEG): container finished" podID="9afcc108-879e-4244-a52b-1c5720d08571" containerID="126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d" exitCode=0 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.095315 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerDied","Data":"126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.100749 4854 generic.go:334] "Generic (PLEG): container finished" podID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerID="51cf1354e8866c019109dd0689ead62267930f50c8e279fc80a89946e66485df" exitCode=0 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.100795 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" event={"ID":"82dcb747-3603-42a5-82ca-f7664d5d9027","Type":"ContainerDied","Data":"51cf1354e8866c019109dd0689ead62267930f50c8e279fc80a89946e66485df"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.102527 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" event={"ID":"0c7ed8af-66a8-4ce9-95bd-4818cc646245","Type":"ContainerStarted","Data":"5dc25733ed733314e3ae551c6fb03721458558f415ac152bc161de95375b019e"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.102811 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.111604 4854 generic.go:334] "Generic (PLEG): container finished" podID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerID="21f27f09d6dbc1e7c9b44ed26845c77c7130232e16ad10ca00346ecd3f3f82a6" exitCode=0 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.111736 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" event={"ID":"9ecb343a-f88c-49d3-a792-696f8b94eca3","Type":"ContainerDied","Data":"21f27f09d6dbc1e7c9b44ed26845c77c7130232e16ad10ca00346ecd3f3f82a6"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.116529 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-2lwzj_dcde1a7d-7025-45cb-92de-483da7a86296/console-operator/0.log" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.116787 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" event={"ID":"dcde1a7d-7025-45cb-92de-483da7a86296","Type":"ContainerStarted","Data":"d555da6bacbf74a1f53ab157741249a9cf6b0b03c20a50bd35046d0f31021199"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.118275 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" Jan 03 07:06:33 crc 
kubenswrapper[4854]: I0103 07:06:33.118338 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.118359 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.124009 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.131189 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" probeResult="failure" output="command timed out" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.131885 4854 kuberuntime_container.go:700] "PreStop hook not completed in grace period" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" containerID="cri-o://b6c69658c671ea8316febf8922284a5f50e580024481037297ff09c1c29c326a" gracePeriod=30 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.131908 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="748d9586-5917-42ab-8f1f-3a811b724dae" containerName="galera" containerID="cri-o://b6c69658c671ea8316febf8922284a5f50e580024481037297ff09c1c29c326a" gracePeriod=2 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.136949 4854 generic.go:334] "Generic (PLEG): container finished" podID="6515eec5-5595-42cb-8588-81baa0db47c1" containerID="624bad035cc975d2993cfecbbce65f6e0bdf8f1a0acb430a45c97693d78d33ab" exitCode=1 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.136985 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" event={"ID":"6515eec5-5595-42cb-8588-81baa0db47c1","Type":"ContainerDied","Data":"624bad035cc975d2993cfecbbce65f6e0bdf8f1a0acb430a45c97693d78d33ab"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.139410 4854 scope.go:117] "RemoveContainer" containerID="624bad035cc975d2993cfecbbce65f6e0bdf8f1a0acb430a45c97693d78d33ab" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.157430 4854 generic.go:334] "Generic (PLEG): container finished" podID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerID="beefc6c9c7cd45bbcbb076ad45c39976e498a1e4adab4caf810e7c2791e0ac1e" exitCode=0 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.157481 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8f5jd" event={"ID":"a0902db0-b7a6-496e-955c-c6f6bb3429c6","Type":"ContainerDied","Data":"beefc6c9c7cd45bbcbb076ad45c39976e498a1e4adab4caf810e7c2791e0ac1e"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.159025 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" 
event={"ID":"c2f6c336-91f0-41e6-b439-c5d940264b7f","Type":"ContainerStarted","Data":"445e8a81971caa320b34a6440df7c8934734d0294e7652cb6be494fdcffec7f8"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.160144 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.170046 4854 generic.go:334] "Generic (PLEG): container finished" podID="04d8c7f1-6674-45b0-9506-9d62c1a2f892" containerID="3f4692302ff77bfafcff5ca37f1422fddb733d20b795c9b3ca159f49df47472f" exitCode=1 Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.171104 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" event={"ID":"04d8c7f1-6674-45b0-9506-9d62c1a2f892","Type":"ContainerDied","Data":"3f4692302ff77bfafcff5ca37f1422fddb733d20b795c9b3ca159f49df47472f"} Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.171585 4854 scope.go:117] "RemoveContainer" containerID="3f4692302ff77bfafcff5ca37f1422fddb733d20b795c9b3ca159f49df47472f" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.246793 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-57f57bb94b-jb8qx" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.432904 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" podUID="bc9994eb-5930-484d-a02c-60d4e13483e2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": dial tcp 10.217.0.100:8081: connect: connection refused" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.520905 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-jx5q2" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.562361 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" podUID="f5b690cb-eb48-469c-a774-eff5eda46f89" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused" Jan 03 07:06:33 crc kubenswrapper[4854]: E0103 07:06:33.606358 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d is running failed: container process not found" containerID="126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Jan 03 07:06:33 crc kubenswrapper[4854]: E0103 07:06:33.609266 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d is running failed: container process not found" containerID="126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O 
/dev/null http://localhost:9090/-/ready; else exit 1; fi"] Jan 03 07:06:33 crc kubenswrapper[4854]: E0103 07:06:33.613533 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d is running failed: container process not found" containerID="126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Jan 03 07:06:33 crc kubenswrapper[4854]: E0103 07:06:33.613599 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 126e20e41c5864ad13d00e751b2fc481ae418edc9985b5ae2aac0e5b79445f9d is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.746798 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="79760a75-c798-415d-be02-dd3a6a9c74ee" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.176:9090/-/ready\": dial tcp 10.217.0.176:9090: connect: connection refused" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.799327 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.844129 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" podUID="a327e8cf-824f-41b1-9076-5fd57a8b4352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": dial tcp 10.217.0.105:8081: connect: connection refused" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.942630 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" podUID="1d8399ce-3c90-4601-9a32-31dc20da4552" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": dial tcp 10.217.0.108:8081: connect: connection refused" Jan 03 07:06:33 crc kubenswrapper[4854]: I0103 07:06:33.960027 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 07:06:33 crc kubenswrapper[4854]: E0103 07:06:33.986260 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46 is running failed: container process not found" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:33 crc kubenswrapper[4854]: E0103 07:06:33.986530 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46 is running failed: container process not found" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:33 crc kubenswrapper[4854]: E0103 07:06:33.996891 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46 is running failed: container process not found" containerID="dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 07:06:33 crc kubenswrapper[4854]: E0103 07:06:33.996956 4854 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dfef79ec6ba8d75de052d84ce5dcd60a393c8f2af22bc8300a68bb6818818d46 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.136471 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.136527 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: E0103 07:06:34.192634 4854 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/bin/bash /var/lib/operator-scripts/mysql_shutdown.sh' exited with 137: " execCommand=["/bin/bash","/var/lib/operator-scripts/mysql_shutdown.sh"] containerName="galera" pod="openstack/openstack-galera-0" message="" Jan 03 07:06:34 crc kubenswrapper[4854]: E0103 07:06:34.197249 4854 kuberuntime_container.go:691] "PreStop hook failed" err="command '/bin/bash /var/lib/operator-scripts/mysql_shutdown.sh' exited with 137: " pod="openstack/openstack-galera-0" podUID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerName="galera" containerID="cri-o://85534b86966512a3a6777d75350ffebd09d510b71ed3bd20f577bb5b92d31a74" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.204962 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-9mfrk" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.205014 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.206107 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"0e7ade64b8e2469e96b56a33f7507989607bc747da1305a20feaa1f07204144e"} pod="metallb-system/speaker-9mfrk" containerMessage="Container speaker failed liveness probe, will be restarted" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.206179 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" containerID="cri-o://0e7ade64b8e2469e96b56a33f7507989607bc747da1305a20feaa1f07204144e" gracePeriod=2 Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 
07:06:34.207116 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-9mfrk" podUID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.217475 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" podUID="ad6a18d3-e1d2-446a-9b41-a9fca5e8b574" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.267567 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" podUID="05f5522f-8e47-4d35-be75-2edee0f16f77" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": dial tcp 10.217.0.114:8081: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.303208 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.303263 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.303361 4854 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-q6v5f container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.303378 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" podUID="e1f91a20-c61d-488f-98ab-f966174f3764" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.73:8443/healthz\": dial tcp 10.217.0.73:8443: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.328259 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" event={"ID":"159783f1-b3b7-432d-b243-e8e7076ddd0a","Type":"ContainerStarted","Data":"d01d65a6a10952f7e295328b7ce3978c7f0880274ea1d1a9e567b8301934ffc8"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.331806 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.331934 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": dial tcp 
10.217.0.60:6443: connect: connection refused" start-of-body= Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.331985 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": dial tcp 10.217.0.60:6443: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.373549 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" event={"ID":"7d4776d0-290f-4c82-aa5c-6412b5bb4608","Type":"ContainerStarted","Data":"89d100332b58601925bd0874accd671cf71bd64ed443336790da7765b8c2804b"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.374061 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.412912 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" event={"ID":"f5b690cb-eb48-469c-a774-eff5eda46f89","Type":"ContainerStarted","Data":"f14018c392b50762ccb9ee7f83d42430763fd25d6dcb5d2614eeb37290a94bca"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.413219 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.435402 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7666dbdd4f-46t4f" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.437368 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" event={"ID":"25988b2b-1924-4007-a6b1-5e5403d5dc68","Type":"ContainerStarted","Data":"f6df32fd2690dcc6518b7d835000742d4fb4d61e049215002ec4afc8dc28a0e1"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.437747 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.440401 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" event={"ID":"e62c43c5-cac2-4f9f-9e1b-de61827c4c94","Type":"ContainerStarted","Data":"0fd2f4925b855e16a1682df3b8f1768cbfbb0f307b982cbf94025d293498d430"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.442257 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.477745 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" event={"ID":"a327e8cf-824f-41b1-9076-5fd57a8b4352","Type":"ContainerStarted","Data":"e794b56b4b433afa37691dc1302f45524d3a35c8e2cc656276efd2e4afd5c409"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.489740 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.555620 4854 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.561439 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"79760a75-c798-415d-be02-dd3a6a9c74ee","Type":"ContainerStarted","Data":"f8e3a3e9fa988f2951d15420adb981ff0650e0a8f732d6bd589d59cbc935fe3e"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.581340 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" event={"ID":"ad6a18d3-e1d2-446a-9b41-a9fca5e8b574","Type":"ContainerStarted","Data":"adccab766e5b28a11c6bb473b7194db68f4db06c1100164faacdf58dac63a544"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.581533 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.631452 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" event={"ID":"05f5522f-8e47-4d35-be75-2edee0f16f77","Type":"ContainerStarted","Data":"82abff7255aa0cbf4849373259b57715c12aef2c400b0ef97d7a537bbb7a219e"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.631741 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.654529 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" event={"ID":"c752fc50-5b45-4cbc-8a1c-b0cec9e720e5","Type":"ContainerStarted","Data":"4b9c2fc04addad78f8733aff53ae3e33f0b5f4de309d1e5c57385191d1e82073"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.654885 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.686047 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" event={"ID":"1d8399ce-3c90-4601-9a32-31dc20da4552","Type":"ContainerStarted","Data":"a209ea22902b23adaed27fa96855874a87226dc8131ca584ab8b5e27cdca7367"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.687032 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.716036 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.723317 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5d857e54272793d26f7cdc626f49935abb53530d63176989e1deaea067cc9fc4"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.763495 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" 
event={"ID":"bc9994eb-5930-484d-a02c-60d4e13483e2","Type":"ContainerStarted","Data":"12145c233d4ffed3a56a59ff524f24b67281c9c88aeb018c77b54f7794167e10"} Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.766306 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.766375 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.766442 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: E0103 07:06:34.843269 4854 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10578fce_2c06_4977_9cb2_51b8593f9fed.slice/crio-85534b86966512a3a6777d75350ffebd09d510b71ed3bd20f577bb5b92d31a74.scope\": RecentStats: unable to find data in memory cache]" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.897090 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": dial tcp 10.217.0.60:6443: connect: connection refused" start-of-body= Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.897136 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": dial tcp 10.217.0.60:6443: connect: connection refused" Jan 03 07:06:34 crc kubenswrapper[4854]: I0103 07:06:34.939471 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-mxm65" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.194972 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-czgkw"] Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.195394 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.195557 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.439303 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.782257 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.782502 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.797972 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czgkw" event={"ID":"8ff403f4-841a-4305-8f8d-4f5fd6b14765","Type":"ContainerStarted","Data":"c0f92cdd570d3278efbccf7c016d1fd04d43b4061c83b0896dbd520d3b060685"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.809313 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" event={"ID":"82dcb747-3603-42a5-82ca-f7664d5d9027","Type":"ContainerStarted","Data":"173dfe20a85332c948bd30720540d6acd4db8171557be1c558b70799b3e117bc"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.813133 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.814290 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.814338 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.838272 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6gct" event={"ID":"99f863f5-fa79-40f0-8ee2-d3d75b6c3df2","Type":"ContainerStarted","Data":"576ed1bcd66cbc1cccd2d2eaef7f18f7725e3a41e73aae511eb575e0fc8623db"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.852506 4854 generic.go:334] "Generic (PLEG): container finished" podID="10578fce-2c06-4977-9cb2-51b8593f9fed" containerID="85534b86966512a3a6777d75350ffebd09d510b71ed3bd20f577bb5b92d31a74" exitCode=137 Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.852911 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10578fce-2c06-4977-9cb2-51b8593f9fed","Type":"ContainerDied","Data":"85534b86966512a3a6777d75350ffebd09d510b71ed3bd20f577bb5b92d31a74"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.886777 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" 
event={"ID":"04d8c7f1-6674-45b0-9506-9d62c1a2f892","Type":"ContainerStarted","Data":"a6613917a743bfd6c50897550d82d2a9978597365d33b045c5a5f7eb0bce89c1"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.887008 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.901683 4854 generic.go:334] "Generic (PLEG): container finished" podID="748d9586-5917-42ab-8f1f-3a811b724dae" containerID="b6c69658c671ea8316febf8922284a5f50e580024481037297ff09c1c29c326a" exitCode=137 Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.901769 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"748d9586-5917-42ab-8f1f-3a811b724dae","Type":"ContainerDied","Data":"b6c69658c671ea8316febf8922284a5f50e580024481037297ff09c1c29c326a"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.931625 4854 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-n82hj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.931671 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" podUID="9ecb343a-f88c-49d3-a792-696f8b94eca3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.931956 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9afcc108-879e-4244-a52b-1c5720d08571","Type":"ContainerStarted","Data":"0e468fca9f6645516a2a444dc40169d3618461787dd4cf230a4b7abc83e6ea01"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.968876 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" event={"ID":"81de0b3b-e6fc-45c9-b347-995726d00213","Type":"ContainerStarted","Data":"16835c050ed1f42dd04ba8487d14a9a7b0025288047534e29687bc823ae5efe0"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.970108 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.974866 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" event={"ID":"0f4370d7-e178-42cc-99ec-fdfeca5fb5f8","Type":"ContainerStarted","Data":"9a1f118244a6e2fe8a455d1c012553472d82f6332f1a1bc05595d48bbeb19258"} Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.974907 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.976222 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Jan 03 07:06:35 crc 
kubenswrapper[4854]: I0103 07:06:35.976275 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.976358 4854 patch_prober.go:28] interesting pod/oauth-openshift-6994f97844-8cxlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.60:6443/healthz\": dial tcp 10.217.0.60:6443: connect: connection refused" start-of-body= Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.976378 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" podUID="159783f1-b3b7-432d-b243-e8e7076ddd0a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.60:6443/healthz\": dial tcp 10.217.0.60:6443: connect: connection refused" Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.980469 4854 patch_prober.go:28] interesting pod/console-operator-58897d9998-2lwzj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 03 07:06:35 crc kubenswrapper[4854]: I0103 07:06:35.980535 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2lwzj" podUID="dcde1a7d-7025-45cb-92de-483da7a86296" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 03 07:06:36 crc kubenswrapper[4854]: I0103 07:06:36.791225 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 07:06:36 crc kubenswrapper[4854]: I0103 07:06:36.985601 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vtpbv" event={"ID":"b0379c6e-b02d-40ef-b9ae-add1e633bc4a","Type":"ContainerStarted","Data":"2e727f95cc4fe442dc39c7ad3abd28ed607e50cfbdf4ba453f657d73a8372dcc"} Jan 03 07:06:36 crc kubenswrapper[4854]: I0103 07:06:36.988303 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" event={"ID":"5ab7ee8b-9182-43e2-85de-f8d92aa12587","Type":"ContainerStarted","Data":"722529a2dc2ec05e6d7076a1b30aef4b6edb3b38ed8cdb9510fce9a272161cf0"} Jan 03 07:06:36 crc kubenswrapper[4854]: I0103 07:06:36.988619 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" Jan 03 07:06:36 crc kubenswrapper[4854]: I0103 07:06:36.988833 4854 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fwgd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Jan 03 07:06:36 crc kubenswrapper[4854]: I0103 07:06:36.988873 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2" 
podUID="5ab7ee8b-9182-43e2-85de-f8d92aa12587" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Jan 03 07:06:36 crc kubenswrapper[4854]: I0103 07:06:36.998379 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" event={"ID":"9ecb343a-f88c-49d3-a792-696f8b94eca3","Type":"ContainerStarted","Data":"c079a2f44dafe20dd90d49d306c526db1bf402428134e5a465d60f66cf89f5cf"} Jan 03 07:06:36 crc kubenswrapper[4854]: I0103 07:06:36.998502 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.006375 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" event={"ID":"56476ba9-ae33-4d34-855c-0e144e4f5da3","Type":"ContainerStarted","Data":"20ac6b133abc21fb6c362a9e1c801476e179d036f88d9530f23ac6ce88fa31cc"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.006561 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.008713 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" event={"ID":"14991c3c-8c35-4008-b1a0-1b8690074322","Type":"ContainerStarted","Data":"7dcc739d008204cda90c039155d1890fd82d08c27e1fae47be1584d43acc34fe"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.009295 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.010138 4854 generic.go:334] "Generic (PLEG): container finished" podID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerID="509d94d1ad0ebc4bc528f4bfb11d681e27fd42b5b7f67f8097f92fa7cf645a4e" exitCode=0 Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.010181 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czgkw" event={"ID":"8ff403f4-841a-4305-8f8d-4f5fd6b14765","Type":"ContainerDied","Data":"509d94d1ad0ebc4bc528f4bfb11d681e27fd42b5b7f67f8097f92fa7cf645a4e"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.013718 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" event={"ID":"7f7c87f2-5743-4000-a36a-3a9400e24cdd","Type":"ContainerStarted","Data":"1a421707d77c1808a1e0ce147b678028b53bcb6bc0f875d017076f638132859c"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.013894 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.016635 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" event={"ID":"ddf8e54e-858e-432c-ab2d-8b4d83f6282b","Type":"ContainerStarted","Data":"6d074dffa437ca5ed215095e1fd53fdc0b273f71bc596e239bd1957d92d90b73"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.017583 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.023807 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"748d9586-5917-42ab-8f1f-3a811b724dae","Type":"ContainerStarted","Data":"689922fb2e46874fcf7b0d8e87a2098750b3e3a285dd234a8d8e7e2950c6fbf7"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.055910 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" event={"ID":"6515eec5-5595-42cb-8588-81baa0db47c1","Type":"ContainerStarted","Data":"81b629571eb65b9f1cf13ed15eef642df51f6f4a32f992c034e7d8b4d7c3e855"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.056567 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.080796 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" event={"ID":"5c8ccde8-0051-491f-b5d6-a2930440c138","Type":"ContainerStarted","Data":"572ec7023ac7296706165412566da57846090c615615b82918be3442d230e830"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.081457 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.081612 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": dial tcp 10.217.0.7:8081: connect: connection refused" start-of-body= Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.081683 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": dial tcp 10.217.0.7:8081: connect: connection refused" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.093020 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7vksk" event={"ID":"ec8a24a9-62d4-4db8-8f17-f261a85d6a47","Type":"ContainerStarted","Data":"3aa1db8dca0ca3f4378994f228f295b39c816569d5b1227837976effa23e53e0"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.090201 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.101039 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" event={"ID":"ba0f32da-a0e3-4c43-8dde-d6212a1c63e1","Type":"ContainerStarted","Data":"572cfc62d2849fd5298e4f0927286c95105842523c88c046e12126292f820889"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.102203 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.121750 4854 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress_router-default-5444994796-tdlx9_ab6ec22e-2a2c-4e28-8242-5bd783990843/router/0.log" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.122352 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tdlx9" event={"ID":"ab6ec22e-2a2c-4e28-8242-5bd783990843","Type":"ContainerStarted","Data":"0ef1391b334c68de69a4ffbdee597710d1b66bd7253b805b67b04682459f796d"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.128653 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" event={"ID":"fe7f33a3-c4b8-44b6-81f1-c2143cbb9dd1","Type":"ContainerStarted","Data":"537a2275d831e0e1b41d0a570cd06fe5a14e0669dc2b5b1544e8bfc5c7ddd9cf"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.128901 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.152352 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"10578fce-2c06-4977-9cb2-51b8593f9fed","Type":"ContainerStarted","Data":"ec3b4713724fea0db5cfdf53ab9ff75d4081e0a0f1d5c69dfdbdef366a9fbc17"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.183507 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pvh8" event={"ID":"03a2de93-c858-46e8-ae42-a34d1d776b7c","Type":"ContainerStarted","Data":"e59ee062a4d7d1b3cb077bce97bf9da07113d28dc2426814fbeb14050d682fca"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.191934 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" event={"ID":"07007d77-4861-45ac-aacd-17b840bef2ee","Type":"ContainerStarted","Data":"e4d5dd7fb0e5613120baf598e6a1c6481c66b267070b62524a801cd535494af0"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.192028 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.194044 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" event={"ID":"1f9928f3-0c28-40df-b6ad-c871424ad3a6","Type":"ContainerStarted","Data":"d2962378a4cd2656236357900e0d4162aca8511fb95dc7a6017fffc584ede061"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.195293 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.197571 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lzwf" event={"ID":"7e9a4f28-3133-4df6-9ed3-fbae3e03d777","Type":"ContainerStarted","Data":"d2767d64a7f7ae38b5f30124df306b3096fff289eed0c6dd806667e697a4bfc1"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.200517 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" event={"ID":"402a077e-f741-447d-ab1c-25bc62cd24cf","Type":"ContainerStarted","Data":"f7587d4ecba389388d9fc028a5ef2b54bd5c19188a3faa6848cf5f3d8485254b"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.200689 4854 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.217710 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" event={"ID":"07528198-b6c3-44c7-aec4-4647d7a06116","Type":"ContainerStarted","Data":"0e3bf2a63e20dda5c4ba58e34c6c7d39dc946a92cad4c54c05eedb6e9059cca4"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.218488 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.218553 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lcbzf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.218580 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.236830 4854 generic.go:334] "Generic (PLEG): container finished" podID="b826d6d3-0de8-4b3d-9294-9e5f8f9faae6" containerID="0e7ade64b8e2469e96b56a33f7507989607bc747da1305a20feaa1f07204144e" exitCode=137 Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.236900 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9mfrk" event={"ID":"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6","Type":"ContainerDied","Data":"0e7ade64b8e2469e96b56a33f7507989607bc747da1305a20feaa1f07204144e"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.236935 4854 scope.go:117] "RemoveContainer" containerID="252cba5186a3c3dc8dd0d53e03137843f95f7633b9b36e4815bc42dab4f08ae0" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.261969 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8f5jd" event={"ID":"a0902db0-b7a6-496e-955c-c6f6bb3429c6","Type":"ContainerStarted","Data":"da30dd7c11ca49892725c166df643b533f3799c09d26b32140ed72bf4fc553b2"} Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.262741 4854 patch_prober.go:28] interesting pod/controller-manager-7ff6f7c9f7-lfv4z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.262793 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z" podUID="82dcb747-3603-42a5-82ca-f7664d5d9027" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.336022 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tdlx9" Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.337060 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 
Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.337137 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 03 07:06:37 crc kubenswrapper[4854]: I0103 07:06:37.620819 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.265003 4854 patch_prober.go:28] interesting pod/route-controller-manager-799fd78b6c-wqs5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.265496 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" podUID="0f4370d7-e178-42cc-99ec-fdfeca5fb5f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.275668 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czgkw" event={"ID":"8ff403f4-841a-4305-8f8d-4f5fd6b14765","Type":"ContainerStarted","Data":"e8a0a26a317741ba9c1810c0688e02a82ec3277fa6e52fd3ad37f489c066129a"}
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.279115 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9mfrk"
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.279144 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9mfrk" event={"ID":"b826d6d3-0de8-4b3d-9294-9e5f8f9faae6","Type":"ContainerStarted","Data":"48ea861e97a0c46412bd862f63fe2e5c8666d10a252620f81879fdb53760ec0c"}
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.282730 4854 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lcbzf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body=
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.282792 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf" podUID="07528198-b6c3-44c7-aec4-4647d7a06116" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused"
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.283283 4854 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9trnq container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": dial tcp 10.217.0.7:8081: connect: connection refused" start-of-body=
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.283323 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" podUID="5c8ccde8-0051-491f-b5d6-a2930440c138" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": dial tcp 10.217.0.7:8081: connect: connection refused"
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.334867 4854 patch_prober.go:28] interesting pod/router-default-5444994796-tdlx9 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.334923 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tdlx9" podUID="ab6ec22e-2a2c-4e28-8242-5bd783990843" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.338886 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7ff6f7c9f7-lfv4z"
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.604388 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Jan 03 07:06:38 crc kubenswrapper[4854]: I0103 07:06:38.746484 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 03 07:06:39 crc kubenswrapper[4854]: I0103 07:06:39.031438 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fwgd2"
Jan 03 07:06:39 crc kubenswrapper[4854]: I0103 07:06:39.112206 4854 patch_prober.go:28] interesting pod/loki-operator-controller-manager-bd45dfbc8-vmrll container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" start-of-body=
Jan 03 07:06:39 crc kubenswrapper[4854]: I0103 07:06:39.112486 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" podUID="0c7ed8af-66a8-4ce9-95bd-4818cc646245" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused"
Jan 03 07:06:39 crc kubenswrapper[4854]: I0103 07:06:39.316264 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 07:06:39 crc kubenswrapper[4854]: I0103 07:06:39.317064 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 07:06:39 crc kubenswrapper[4854]: I0103 07:06:39.349053 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tdlx9"
Jan 03 07:06:39 crc kubenswrapper[4854]: I0103 07:06:39.778428 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-85546d974f-nhdvf"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.029259 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2lwzj"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.108071 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.108216 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.109316 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"8d605ff5812a1bb92720d9dfe6ed631408ba09c2d540278cddd1b7b5491d467b"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.109387 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" containerID="cri-o://8d605ff5812a1bb92720d9dfe6ed631408ba09c2d540278cddd1b7b5491d467b" gracePeriod=30
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.307612 4854 generic.go:334] "Generic (PLEG): container finished" podID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerID="e8a0a26a317741ba9c1810c0688e02a82ec3277fa6e52fd3ad37f489c066129a" exitCode=0
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.307696 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czgkw" event={"ID":"8ff403f4-841a-4305-8f8d-4f5fd6b14765","Type":"ContainerDied","Data":"e8a0a26a317741ba9c1810c0688e02a82ec3277fa6e52fd3ad37f489c066129a"}
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.309543 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tdlx9"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.320192 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tdlx9"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.333461 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltxw"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.458400 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lcbzf"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.552625 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output=<
Jan 03 07:06:40 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s
Jan 03 07:06:40 crc kubenswrapper[4854]: >
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.800195 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.800514 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6pvh8"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.850631 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 03 07:06:40 crc kubenswrapper[4854]: I0103 07:06:40.850751 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.060118 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-lzwb4"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.081776 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6fczv"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.180533 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-bzjqc"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.204371 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.206546 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8f5jd"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.341546 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.341770 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.341819 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.401322 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7vksk"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.401615 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7vksk"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.703796 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7vksk"
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.862801 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" probeResult="failure" output=<
Jan 03 07:06:41 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s
Jan 03 07:06:41 crc kubenswrapper[4854]: >
Jan 03 07:06:41 crc kubenswrapper[4854]: I0103 07:06:41.944151 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n82hj"
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.293730 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.294557 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.331643 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czgkw" event={"ID":"8ff403f4-841a-4305-8f8d-4f5fd6b14765","Type":"ContainerStarted","Data":"79da304d6df2d02f77999b0c9d3cf1664be473b3c39ec1ca0be21b217d3dec57"}
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.335164 4854 generic.go:334] "Generic (PLEG): container finished" podID="2d802db5-d336-4639-8264-e628fa15d820" containerID="8d605ff5812a1bb92720d9dfe6ed631408ba09c2d540278cddd1b7b5491d467b" exitCode=0
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.335297 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d802db5-d336-4639-8264-e628fa15d820","Type":"ContainerDied","Data":"8d605ff5812a1bb92720d9dfe6ed631408ba09c2d540278cddd1b7b5491d467b"}
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.361298 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-czgkw" podStartSLOduration=44.51137689 podStartE2EDuration="48.357041319s" podCreationTimestamp="2026-01-03 07:05:54 +0000 UTC" firstStartedPulling="2026-01-03 07:06:37.01224639 +0000 UTC m=+5175.338822962" lastFinishedPulling="2026-01-03 07:06:40.857910829 +0000 UTC m=+5179.184487391" observedRunningTime="2026-01-03 07:06:42.354386964 +0000 UTC m=+5180.680963546" watchObservedRunningTime="2026-01-03 07:06:42.357041319 +0000 UTC m=+5180.683617891"
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.383124 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7vksk"
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.518937 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-czgkw"
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.518995 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-czgkw"
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.605293 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.745673 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8f5jd" podUID="a0902db0-b7a6-496e-955c-c6f6bb3429c6" containerName="registry-server" probeResult="failure" output=<
Jan 03 07:06:42 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s
Jan 03 07:06:42 crc kubenswrapper[4854]: >
Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.749573 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" containerID="cri-o://880bc6dc8873f0bbc31cde5de1f7081f573da192ec8aefac577a46a08ed98ee5" gracePeriod=13
Jan 03 07:06:42 crc kubenswrapper[4854]: E0103 07:06:42.903520 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 03 07:06:42 crc kubenswrapper[4854]: E0103 07:06:42.907038 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:42 crc kubenswrapper[4854]: E0103 07:06:42.916062 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:42 crc kubenswrapper[4854]: E0103 07:06:42.916140 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-64d84f65b5-cnzjg" podUID="f0006564-0566-4941-983d-8e5c58889f7f" containerName="heat-engine" Jan 03 07:06:42 crc kubenswrapper[4854]: I0103 07:06:42.917557 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9trnq" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.350539 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67666b4d85-nwx4t_002174a6-3b57-4eba-985b-9fd7c492b143/console/0.log" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.350867 4854 generic.go:334] "Generic (PLEG): container finished" podID="002174a6-3b57-4eba-985b-9fd7c492b143" containerID="880bc6dc8873f0bbc31cde5de1f7081f573da192ec8aefac577a46a08ed98ee5" exitCode=2 Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.351366 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67666b4d85-nwx4t" event={"ID":"002174a6-3b57-4eba-985b-9fd7c492b143","Type":"ContainerDied","Data":"880bc6dc8873f0bbc31cde5de1f7081f573da192ec8aefac577a46a08ed98ee5"} Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.351412 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67666b4d85-nwx4t" event={"ID":"002174a6-3b57-4eba-985b-9fd7c492b143","Type":"ContainerStarted","Data":"eadce77e8b53e87116f0373f0aa2cfe1c33a343b1da9b616d52dea6bd1bf7197"} Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.437172 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-c8b457848-dg5cs" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.519021 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-jvp7v" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.532010 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-msvf6" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.565273 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-trsxr" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.604662 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.627318 4854 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-marketplace-czgkw" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:43 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:43 crc kubenswrapper[4854]: > Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.692283 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-monitoring/prometheus-k8s-0" podUID="9afcc108-879e-4244-a52b-1c5720d08571" containerName="prometheus" probeResult="failure" output=< Jan 03 07:06:43 crc kubenswrapper[4854]: % Total % Received % Xferd Average Speed Time Time Time Current Jan 03 07:06:43 crc kubenswrapper[4854]: Dload Upload Total Spent Left Speed Jan 03 07:06:43 crc kubenswrapper[4854]: [166B blob data] Jan 03 07:06:43 crc kubenswrapper[4854]: curl: (22) The requested URL returned error: 503 Jan 03 07:06:43 crc kubenswrapper[4854]: > Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.847749 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-hgwsb" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.897042 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-k6nnf" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.940760 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-4pbfn" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.969144 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-xgtzc" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.986731 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6gct" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.986936 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6gct" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.997454 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-x78fv" Jan 03 07:06:43 crc kubenswrapper[4854]: I0103 07:06:43.998417 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vdnq9" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.132636 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-z7cfx" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.218527 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-7lvxp" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.268221 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8xksh" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.269551 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-ncjlb" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.305354 4854 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-q6v5f" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.324488 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-xrghz" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.341650 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jqj54" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.370139 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d802db5-d336-4639-8264-e628fa15d820","Type":"ContainerStarted","Data":"4d3d8e36da92c4909ac59c6400fdc8382de92a3ce9e8dfa7e4f358e22357e734"} Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.390917 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-dprp4" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.574780 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-qzzw2" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.906117 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6994f97844-8cxlw" Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.990327 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n9b9g"] Jan 03 07:06:44 crc kubenswrapper[4854]: I0103 07:06:44.994579 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.014961 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9b9g"] Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.072245 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-s6gct" podUID="99f863f5-fa79-40f0-8ee2-d3d75b6c3df2" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:45 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:45 crc kubenswrapper[4854]: > Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.080132 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-utilities\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.080336 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-catalog-content\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.080632 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg6rt\" (UniqueName: \"kubernetes.io/projected/b08d10d9-fc85-455b-a848-ae1109eee932-kube-api-access-hg6rt\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.184735 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-utilities\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.185052 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-catalog-content\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.185284 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg6rt\" (UniqueName: \"kubernetes.io/projected/b08d10d9-fc85-455b-a848-ae1109eee932-kube-api-access-hg6rt\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.186006 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-utilities\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.186266 4854 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-catalog-content\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.234915 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg6rt\" (UniqueName: \"kubernetes.io/projected/b08d10d9-fc85-455b-a848-ae1109eee932-kube-api-access-hg6rt\") pod \"redhat-operators-n9b9g\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.336701 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.373472 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.373771 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.375670 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" start-of-body= Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.375815 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.384506 4854 generic.go:334] "Generic (PLEG): container finished" podID="e5328fb8-38ea-4119-aa67-b052d0ae7971" containerID="ea8c0cd040983fe5129817596250c6a78376cebc85df8129b621c3c77345d4e5" exitCode=1 Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.385917 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5328fb8-38ea-4119-aa67-b052d0ae7971","Type":"ContainerDied","Data":"ea8c0cd040983fe5129817596250c6a78376cebc85df8129b621c3c77345d4e5"} Jan 03 07:06:45 crc kubenswrapper[4854]: I0103 07:06:45.798710 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-799fd78b6c-wqs5s" Jan 03 07:06:46 crc kubenswrapper[4854]: I0103 07:06:46.046157 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9b9g"] Jan 03 07:06:46 crc kubenswrapper[4854]: W0103 07:06:46.067724 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb08d10d9_fc85_455b_a848_ae1109eee932.slice/crio-07ed526d54ad06fcbe7d8c369f6beb35c5cfab4fa1f960fee23239302de1bb16 WatchSource:0}: Error finding container 07ed526d54ad06fcbe7d8c369f6beb35c5cfab4fa1f960fee23239302de1bb16: Status 404 returned error can't find the container with id 07ed526d54ad06fcbe7d8c369f6beb35c5cfab4fa1f960fee23239302de1bb16 Jan 03 07:06:46 crc kubenswrapper[4854]: I0103 07:06:46.397106 4854 generic.go:334] "Generic (PLEG): container finished" 
podID="b08d10d9-fc85-455b-a848-ae1109eee932" containerID="9d52bd6c98dcad3da82ca5a886fc8144733aa46b6e370461608024b65dfeb5b4" exitCode=0 Jan 03 07:06:46 crc kubenswrapper[4854]: I0103 07:06:46.398491 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9b9g" event={"ID":"b08d10d9-fc85-455b-a848-ae1109eee932","Type":"ContainerDied","Data":"9d52bd6c98dcad3da82ca5a886fc8144733aa46b6e370461608024b65dfeb5b4"} Jan 03 07:06:46 crc kubenswrapper[4854]: I0103 07:06:46.398542 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9b9g" event={"ID":"b08d10d9-fc85-455b-a848-ae1109eee932","Type":"ContainerStarted","Data":"07ed526d54ad06fcbe7d8c369f6beb35c5cfab4fa1f960fee23239302de1bb16"} Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.061204 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.137574 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.239291 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-temporary\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.239705 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config-secret\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.239838 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ssh-key\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.239888 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-config-data\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.239916 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.240014 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-workdir\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.240050 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.240107 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzrpx\" (UniqueName: \"kubernetes.io/projected/e5328fb8-38ea-4119-aa67-b052d0ae7971-kube-api-access-jzrpx\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.240159 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ca-certs\") pod \"e5328fb8-38ea-4119-aa67-b052d0ae7971\" (UID: \"e5328fb8-38ea-4119-aa67-b052d0ae7971\") " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.241280 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-config-data" (OuterVolumeSpecName: "config-data") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.242584 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.247160 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.247853 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.256576 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5328fb8-38ea-4119-aa67-b052d0ae7971-kube-api-access-jzrpx" (OuterVolumeSpecName: "kube-api-access-jzrpx") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "kube-api-access-jzrpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.296288 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.310906 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.317407 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.343197 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzrpx\" (UniqueName: \"kubernetes.io/projected/e5328fb8-38ea-4119-aa67-b052d0ae7971-kube-api-access-jzrpx\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.343230 4854 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.343242 4854 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.343253 4854 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.343262 4854 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5328fb8-38ea-4119-aa67-b052d0ae7971-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.343270 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.343279 4854 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5328fb8-38ea-4119-aa67-b052d0ae7971-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.343305 4854 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.351935 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e5328fb8-38ea-4119-aa67-b052d0ae7971" (UID: "e5328fb8-38ea-4119-aa67-b052d0ae7971"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.395919 4854 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.425564 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9b9g" event={"ID":"b08d10d9-fc85-455b-a848-ae1109eee932","Type":"ContainerStarted","Data":"86662bd3a93b1430cb203d0ee17d2ea0b4f1cefadac58b3386a7bff731bfaf1a"} Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.440854 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5328fb8-38ea-4119-aa67-b052d0ae7971","Type":"ContainerDied","Data":"2ca63f2a01b6b67bdb1894d1e8e55816d944c4efb85af7cf8fa72a8e7d455ac0"} Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.440925 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca63f2a01b6b67bdb1894d1e8e55816d944c4efb85af7cf8fa72a8e7d455ac0" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.440999 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.445319 4854 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5328fb8-38ea-4119-aa67-b052d0ae7971-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:47 crc kubenswrapper[4854]: I0103 07:06:47.445353 4854 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:48 crc kubenswrapper[4854]: I0103 07:06:48.456785 4854 generic.go:334] "Generic (PLEG): container finished" podID="29a7524a-4f1c-4e10-ae41-8e05f91cbde6" containerID="161aa8978d0e162a2d5fef70db9445adca1e8119b53f12d630da478dbffc384e" exitCode=137 Jan 03 07:06:48 crc kubenswrapper[4854]: I0103 07:06:48.456969 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerDied","Data":"161aa8978d0e162a2d5fef70db9445adca1e8119b53f12d630da478dbffc384e"} Jan 03 07:06:48 crc kubenswrapper[4854]: I0103 07:06:48.457601 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerStarted","Data":"33a021313b1ddd614038e432bbf8be7031e6d68346f9d05b5b77c480b2461255"} Jan 03 07:06:48 crc kubenswrapper[4854]: I0103 07:06:48.746413 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 03 07:06:48 crc kubenswrapper[4854]: I0103 07:06:48.753388 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 03 07:06:49 crc kubenswrapper[4854]: I0103 07:06:49.107285 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-bd45dfbc8-vmrll" Jan 03 07:06:49 crc kubenswrapper[4854]: I0103 07:06:49.476057 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 03 07:06:49 crc kubenswrapper[4854]: I0103 07:06:49.504352 
4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qqbq9" Jan 03 07:06:50 crc kubenswrapper[4854]: I0103 07:06:50.055621 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7nclw5" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.074848 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:51 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:51 crc kubenswrapper[4854]: > Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.281921 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-85b679bdc6-qrnbc" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.306545 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8f5jd" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.308835 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 03 07:06:51 crc kubenswrapper[4854]: E0103 07:06:51.310448 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5328fb8-38ea-4119-aa67-b052d0ae7971" containerName="tempest-tests-tempest-tests-runner" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.310486 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5328fb8-38ea-4119-aa67-b052d0ae7971" containerName="tempest-tests-tempest-tests-runner" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.311455 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5328fb8-38ea-4119-aa67-b052d0ae7971" containerName="tempest-tests-tempest-tests-runner" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.313393 4854 util.go:30] "No sandbox for pod can be found. 
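The 07:06:47 block above is the kubelet's volume reconciler tearing down openstack/tempest-tests-tempest: UnmountVolume starts per volume, UnmountVolume.TearDown succeeds per plugin, the reconciler marks each volume detached, and the local volume additionally goes through UnmountDevice before its device path is released. A minimal Go sketch of the volume set those plugin names imply is below; it is a reconstruction for illustration only, and the ConfigMap, Secret, and claim names marked as assumed do not appear in the log.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// tempestVolumes sketches volumes matching the plugin names in the
// teardown entries: configmap, secret, empty-dir, and local-volume.
func tempestVolumes() []corev1.Volume {
	return []corev1.Volume{
		// "kubernetes.io/configmap" entries (config-data, openstack-config)
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "tempest-config"}, // assumed name
			},
		}},
		// "kubernetes.io/secret" entries (ca-certs, ssh-key, openstack-config-secret)
		{Name: "ca-certs", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}, // assumed name
		}},
		// "kubernetes.io/empty-dir" entries (ephemeral workdir/temporary)
		{Name: "test-operator-ephemeral-workdir", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		}},
		// "kubernetes.io/local-volume": OuterVolumeSpecName "test-operator-logs"
		// is a PVC bound to the PV local-storage04-crc.
		{Name: "test-operator-logs", VolumeSource: corev1.VolumeSource{
			PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
				ClaimName: "test-operator-logs", // assumed claim name
			},
		}},
	}
}

func main() { _ = tempestVolumes() }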
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.316886 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5fscr" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.343845 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.343890 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.344975 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.496788 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8f5jd" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.547208 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"902965ab-fc30-41fe-864a-1f8275d1d87d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.547599 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8cr\" (UniqueName: \"kubernetes.io/projected/902965ab-fc30-41fe-864a-1f8275d1d87d-kube-api-access-xt8cr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"902965ab-fc30-41fe-864a-1f8275d1d87d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.648816 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"902965ab-fc30-41fe-864a-1f8275d1d87d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.648971 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8cr\" (UniqueName: \"kubernetes.io/projected/902965ab-fc30-41fe-864a-1f8275d1d87d-kube-api-access-xt8cr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"902965ab-fc30-41fe-864a-1f8275d1d87d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.649524 4854 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"902965ab-fc30-41fe-864a-1f8275d1d87d\") device mount path 
\"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:51 crc kubenswrapper[4854]: I0103 07:06:51.879849 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6pvh8" podUID="03a2de93-c858-46e8-ae42-a34d1d776b7c" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:51 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:51 crc kubenswrapper[4854]: > Jan 03 07:06:52 crc kubenswrapper[4854]: I0103 07:06:52.085730 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:06:52 crc kubenswrapper[4854]: I0103 07:06:52.155487 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8cr\" (UniqueName: \"kubernetes.io/projected/902965ab-fc30-41fe-864a-1f8275d1d87d-kube-api-access-xt8cr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"902965ab-fc30-41fe-864a-1f8275d1d87d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:52 crc kubenswrapper[4854]: I0103 07:06:52.205207 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"902965ab-fc30-41fe-864a-1f8275d1d87d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:52 crc kubenswrapper[4854]: I0103 07:06:52.248246 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 07:06:52 crc kubenswrapper[4854]: I0103 07:06:52.647824 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:52 crc kubenswrapper[4854]: I0103 07:06:52.720570 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:52 crc kubenswrapper[4854]: E0103 07:06:52.908025 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:52 crc kubenswrapper[4854]: E0103 07:06:52.911853 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:52 crc kubenswrapper[4854]: E0103 07:06:52.913858 4854 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 03 07:06:52 crc kubenswrapper[4854]: E0103 07:06:52.913896 4854 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
Jan 03 07:06:53 crc kubenswrapper[4854]: I0103 07:06:53.075252 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 03 07:06:53 crc kubenswrapper[4854]: I0103 07:06:53.537016 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"902965ab-fc30-41fe-864a-1f8275d1d87d","Type":"ContainerStarted","Data":"b71dbbd5ce818b41e968c21c3bd5fb19b4785e1fd8b9f03bc4659b9ee9cc2ee3"}
Jan 03 07:06:53 crc kubenswrapper[4854]: I0103 07:06:53.539460 4854 generic.go:334] "Generic (PLEG): container finished" podID="b08d10d9-fc85-455b-a848-ae1109eee932" containerID="86662bd3a93b1430cb203d0ee17d2ea0b4f1cefadac58b3386a7bff731bfaf1a" exitCode=0
Jan 03 07:06:53 crc kubenswrapper[4854]: I0103 07:06:53.539834 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9b9g" event={"ID":"b08d10d9-fc85-455b-a848-ae1109eee932","Type":"ContainerDied","Data":"86662bd3a93b1430cb203d0ee17d2ea0b4f1cefadac58b3386a7bff731bfaf1a"}
Jan 03 07:06:54 crc kubenswrapper[4854]: I0103 07:06:54.055928 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6gct"
Jan 03 07:06:54 crc kubenswrapper[4854]: I0103 07:06:54.207316 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6gct"
Jan 03 07:06:54 crc kubenswrapper[4854]: I0103 07:06:54.207367 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9mfrk"
Jan 03 07:06:54 crc kubenswrapper[4854]: I0103 07:06:54.554923 4854 generic.go:334] "Generic (PLEG): container finished" podID="f0006564-0566-4941-983d-8e5c58889f7f" containerID="5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae" exitCode=0
Jan 03 07:06:54 crc kubenswrapper[4854]: I0103 07:06:54.555023 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64d84f65b5-cnzjg" event={"ID":"f0006564-0566-4941-983d-8e5c58889f7f","Type":"ContainerDied","Data":"5b3c9925f5f7e6441dbf0a0ee55b9643562516ab15c2af0af5a2c0f0efe1c5ae"}
Jan 03 07:06:54 crc kubenswrapper[4854]: I0103 07:06:54.555353 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64d84f65b5-cnzjg" event={"ID":"f0006564-0566-4941-983d-8e5c58889f7f","Type":"ContainerStarted","Data":"dde0484f351c4da05b311ce6981584a23c70981f016f63d0220d4c22556dad32"}
Jan 03 07:06:54 crc kubenswrapper[4854]: I0103 07:06:54.558713 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9b9g" event={"ID":"b08d10d9-fc85-455b-a848-ae1109eee932","Type":"ContainerStarted","Data":"7d839f66dcb8884c2446672a1ef279e17e94e2499d9d225db0f31c4590ce9213"}
Jan 03 07:06:54 crc kubenswrapper[4854]: I0103 07:06:54.577893 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n9b9g" podStartSLOduration=2.8254817450000003 podStartE2EDuration="10.57787773s" podCreationTimestamp="2026-01-03 07:06:44 +0000 UTC" firstStartedPulling="2026-01-03 07:06:46.402272028 +0000 UTC m=+5184.728848590" lastFinishedPulling="2026-01-03 07:06:54.154668003 +0000 UTC m=+5192.481244575" observedRunningTime="2026-01-03 07:06:54.575763918 +0000 UTC m=+5192.902340520" watchObservedRunningTime="2026-01-03 07:06:54.57787773 +0000 UTC m=+5192.904454302"
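The podStartSLOduration in the entry above is the end-to-end startup duration minus the time spent pulling images, computed on the kubelet's monotonic clock (the m=+... offsets): 10.57787773 - (5192.481244575 - 5184.728848590) = 2.825481745s, matching the reported value (the trailing ...0000003 in the log is float64 rounding). A quick check of the arithmetic in Go:

package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+... values) from the redhat-operators-n9b9g
	// entry above.
	firstStartedPulling := 5184.728848590
	lastFinishedPulling := 5192.481244575
	e2e := 10.57787773 // podStartE2EDuration: watch-observed running time minus creation

	pulling := lastFinishedPulling - firstStartedPulling // 7.752395985s spent pulling images
	fmt.Printf("%.9fs\n", e2e-pulling)                   // prints 2.825481745s = podStartSLOduration
}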
UTC m=+5192.481244575" observedRunningTime="2026-01-03 07:06:54.575763918 +0000 UTC m=+5192.902340520" watchObservedRunningTime="2026-01-03 07:06:54.57787773 +0000 UTC m=+5192.904454302" Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.157187 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-czgkw"] Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.157636 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-czgkw" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="registry-server" containerID="cri-o://79da304d6df2d02f77999b0c9d3cf1664be473b3c39ec1ca0be21b217d3dec57" gracePeriod=2 Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.337183 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.337256 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.374247 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" start-of-body= Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.374293 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.631705 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"902965ab-fc30-41fe-864a-1f8275d1d87d","Type":"ContainerStarted","Data":"b69a7eb32f4e2a769dfe7ad264a25b32854df83dd3b81c2f732df1dfb050ee49"} Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.637243 4854 generic.go:334] "Generic (PLEG): container finished" podID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerID="79da304d6df2d02f77999b0c9d3cf1664be473b3c39ec1ca0be21b217d3dec57" exitCode=0 Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.637341 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czgkw" event={"ID":"8ff403f4-841a-4305-8f8d-4f5fd6b14765","Type":"ContainerDied","Data":"79da304d6df2d02f77999b0c9d3cf1664be473b3c39ec1ca0be21b217d3dec57"} Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.637861 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-64d84f65b5-cnzjg" Jan 03 07:06:55 crc kubenswrapper[4854]: I0103 07:06:55.653493 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=4.579563691 podStartE2EDuration="5.653476745s" podCreationTimestamp="2026-01-03 07:06:50 +0000 UTC" firstStartedPulling="2026-01-03 07:06:53.080259557 +0000 UTC m=+5191.406836129" lastFinishedPulling="2026-01-03 07:06:54.154172611 +0000 UTC m=+5192.480749183" observedRunningTime="2026-01-03 07:06:55.649487507 +0000 UTC m=+5193.976064099" watchObservedRunningTime="2026-01-03 07:06:55.653476745 +0000 UTC m=+5193.980053317" Jan 03 07:06:55 
crc kubenswrapper[4854]: I0103 07:06:55.988629 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.087559 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-catalog-content\") pod \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.087803 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-utilities\") pod \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.087889 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmvd\" (UniqueName: \"kubernetes.io/projected/8ff403f4-841a-4305-8f8d-4f5fd6b14765-kube-api-access-nvmvd\") pod \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\" (UID: \"8ff403f4-841a-4305-8f8d-4f5fd6b14765\") " Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.088506 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-utilities" (OuterVolumeSpecName: "utilities") pod "8ff403f4-841a-4305-8f8d-4f5fd6b14765" (UID: "8ff403f4-841a-4305-8f8d-4f5fd6b14765"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.096599 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff403f4-841a-4305-8f8d-4f5fd6b14765-kube-api-access-nvmvd" (OuterVolumeSpecName: "kube-api-access-nvmvd") pod "8ff403f4-841a-4305-8f8d-4f5fd6b14765" (UID: "8ff403f4-841a-4305-8f8d-4f5fd6b14765"). InnerVolumeSpecName "kube-api-access-nvmvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.109920 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ff403f4-841a-4305-8f8d-4f5fd6b14765" (UID: "8ff403f4-841a-4305-8f8d-4f5fd6b14765"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.190663 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.190699 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvmvd\" (UniqueName: \"kubernetes.io/projected/8ff403f4-841a-4305-8f8d-4f5fd6b14765-kube-api-access-nvmvd\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.190711 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff403f4-841a-4305-8f8d-4f5fd6b14765-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.396969 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9b9g" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="registry-server" probeResult="failure" output=< Jan 03 07:06:56 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:06:56 crc kubenswrapper[4854]: > Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.648728 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czgkw" event={"ID":"8ff403f4-841a-4305-8f8d-4f5fd6b14765","Type":"ContainerDied","Data":"c0f92cdd570d3278efbccf7c016d1fd04d43b4061c83b0896dbd520d3b060685"} Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.648787 4854 scope.go:117] "RemoveContainer" containerID="79da304d6df2d02f77999b0c9d3cf1664be473b3c39ec1ca0be21b217d3dec57" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.648929 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czgkw" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.690404 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-czgkw"] Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.700305 4854 scope.go:117] "RemoveContainer" containerID="e8a0a26a317741ba9c1810c0688e02a82ec3277fa6e52fd3ad37f489c066129a" Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.711677 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-czgkw"] Jan 03 07:06:56 crc kubenswrapper[4854]: I0103 07:06:56.727520 4854 scope.go:117] "RemoveContainer" containerID="509d94d1ad0ebc4bc528f4bfb11d681e27fd42b5b7f67f8097f92fa7cf645a4e" Jan 03 07:06:57 crc kubenswrapper[4854]: I0103 07:06:57.081684 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:06:58 crc kubenswrapper[4854]: I0103 07:06:58.132610 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" path="/var/lib/kubelet/pods/8ff403f4-841a-4305-8f8d-4f5fd6b14765/volumes" Jan 03 07:07:00 crc kubenswrapper[4854]: I0103 07:07:00.783878 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9lzwf" podUID="7e9a4f28-3133-4df6-9ed3-fbae3e03d777" containerName="registry-server" probeResult="failure" output=< Jan 03 07:07:00 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:07:00 crc kubenswrapper[4854]: > Jan 03 07:07:00 crc kubenswrapper[4854]: I0103 07:07:00.887787 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pvh8" Jan 03 07:07:00 crc kubenswrapper[4854]: I0103 07:07:00.984800 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pvh8" Jan 03 07:07:01 crc kubenswrapper[4854]: I0103 07:07:01.342518 4854 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 03 07:07:01 crc kubenswrapper[4854]: I0103 07:07:01.342584 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 03 07:07:01 crc kubenswrapper[4854]: I0103 07:07:01.342636 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 07:07:01 crc kubenswrapper[4854]: I0103 07:07:01.343663 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"5d857e54272793d26f7cdc626f49935abb53530d63176989e1deaea067cc9fc4"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 03 
Jan 03 07:07:02 crc kubenswrapper[4854]: I0103 07:07:02.080973 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 07:07:05 crc kubenswrapper[4854]: I0103 07:07:05.373650 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" start-of-body=
Jan 03 07:07:05 crc kubenswrapper[4854]: I0103 07:07:05.374182 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused"
Jan 03 07:07:06 crc kubenswrapper[4854]: I0103 07:07:06.400400 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9b9g" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="registry-server" probeResult="failure" output=<
Jan 03 07:07:06 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s
Jan 03 07:07:06 crc kubenswrapper[4854]: >
Jan 03 07:07:07 crc kubenswrapper[4854]: I0103 07:07:07.086935 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 07:07:09 crc kubenswrapper[4854]: I0103 07:07:09.377580 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 07:07:09 crc kubenswrapper[4854]: I0103 07:07:09.385562 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7fdb976ccd-xpqws"
Jan 03 07:07:09 crc kubenswrapper[4854]: I0103 07:07:09.456988 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9lzwf"
Jan 03 07:07:10 crc kubenswrapper[4854]: I0103 07:07:10.365071 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-99h4j"
Jan 03 07:07:12 crc kubenswrapper[4854]: I0103 07:07:12.091438 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 07:07:12 crc kubenswrapper[4854]: I0103 07:07:12.947655 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-64d84f65b5-cnzjg"
Jan 03 07:07:13 crc kubenswrapper[4854]: I0103 07:07:13.857465 4854 generic.go:334] "Generic (PLEG): container finished" podID="29a7524a-4f1c-4e10-ae41-8e05f91cbde6"
containerID="bc39fe0ad3c89ce1ba2a6b062e69481645bb4f21f0b1d7997562058d716115ed" exitCode=1 Jan 03 07:07:13 crc kubenswrapper[4854]: I0103 07:07:13.857564 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerDied","Data":"bc39fe0ad3c89ce1ba2a6b062e69481645bb4f21f0b1d7997562058d716115ed"} Jan 03 07:07:13 crc kubenswrapper[4854]: I0103 07:07:13.858581 4854 scope.go:117] "RemoveContainer" containerID="bc39fe0ad3c89ce1ba2a6b062e69481645bb4f21f0b1d7997562058d716115ed" Jan 03 07:07:14 crc kubenswrapper[4854]: I0103 07:07:14.869778 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qmphn" event={"ID":"29a7524a-4f1c-4e10-ae41-8e05f91cbde6","Type":"ContainerStarted","Data":"351f30727bfba765d00d1bf5a9dd906299d268060ecfee69deb3e928a6b7fd70"} Jan 03 07:07:15 crc kubenswrapper[4854]: I0103 07:07:15.374709 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" start-of-body= Jan 03 07:07:15 crc kubenswrapper[4854]: I0103 07:07:15.375197 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" Jan 03 07:07:16 crc kubenswrapper[4854]: I0103 07:07:16.405696 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9b9g" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="registry-server" probeResult="failure" output=< Jan 03 07:07:16 crc kubenswrapper[4854]: timeout: failed to connect service ":50051" within 1s Jan 03 07:07:16 crc kubenswrapper[4854]: > Jan 03 07:07:17 crc kubenswrapper[4854]: I0103 07:07:17.099522 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:07:22 crc kubenswrapper[4854]: I0103 07:07:22.089341 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:07:25 crc kubenswrapper[4854]: I0103 07:07:25.375469 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" start-of-body= Jan 03 07:07:25 crc kubenswrapper[4854]: I0103 07:07:25.376327 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" Jan 03 07:07:25 crc kubenswrapper[4854]: I0103 07:07:25.401012 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:07:25 crc kubenswrapper[4854]: I0103 
07:07:25.455962 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:07:27 crc kubenswrapper[4854]: I0103 07:07:27.082316 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:07:28 crc kubenswrapper[4854]: I0103 07:07:28.960981 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9b9g"] Jan 03 07:07:28 crc kubenswrapper[4854]: I0103 07:07:28.962104 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n9b9g" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="registry-server" containerID="cri-o://7d839f66dcb8884c2446672a1ef279e17e94e2499d9d225db0f31c4590ce9213" gracePeriod=2 Jan 03 07:07:29 crc kubenswrapper[4854]: I0103 07:07:29.107846 4854 generic.go:334] "Generic (PLEG): container finished" podID="b08d10d9-fc85-455b-a848-ae1109eee932" containerID="7d839f66dcb8884c2446672a1ef279e17e94e2499d9d225db0f31c4590ce9213" exitCode=0 Jan 03 07:07:29 crc kubenswrapper[4854]: I0103 07:07:29.108105 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9b9g" event={"ID":"b08d10d9-fc85-455b-a848-ae1109eee932","Type":"ContainerDied","Data":"7d839f66dcb8884c2446672a1ef279e17e94e2499d9d225db0f31c4590ce9213"} Jan 03 07:07:29 crc kubenswrapper[4854]: I0103 07:07:29.925419 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.012681 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-catalog-content\") pod \"b08d10d9-fc85-455b-a848-ae1109eee932\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.012923 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-utilities\") pod \"b08d10d9-fc85-455b-a848-ae1109eee932\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.013202 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg6rt\" (UniqueName: \"kubernetes.io/projected/b08d10d9-fc85-455b-a848-ae1109eee932-kube-api-access-hg6rt\") pod \"b08d10d9-fc85-455b-a848-ae1109eee932\" (UID: \"b08d10d9-fc85-455b-a848-ae1109eee932\") " Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.013864 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-utilities" (OuterVolumeSpecName: "utilities") pod "b08d10d9-fc85-455b-a848-ae1109eee932" (UID: "b08d10d9-fc85-455b-a848-ae1109eee932"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.018051 4854 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.030965 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08d10d9-fc85-455b-a848-ae1109eee932-kube-api-access-hg6rt" (OuterVolumeSpecName: "kube-api-access-hg6rt") pod "b08d10d9-fc85-455b-a848-ae1109eee932" (UID: "b08d10d9-fc85-455b-a848-ae1109eee932"). InnerVolumeSpecName "kube-api-access-hg6rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.121485 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg6rt\" (UniqueName: \"kubernetes.io/projected/b08d10d9-fc85-455b-a848-ae1109eee932-kube-api-access-hg6rt\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.123120 4854 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9b9g" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.143481 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9b9g" event={"ID":"b08d10d9-fc85-455b-a848-ae1109eee932","Type":"ContainerDied","Data":"07ed526d54ad06fcbe7d8c369f6beb35c5cfab4fa1f960fee23239302de1bb16"} Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.143537 4854 scope.go:117] "RemoveContainer" containerID="7d839f66dcb8884c2446672a1ef279e17e94e2499d9d225db0f31c4590ce9213" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.153712 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b08d10d9-fc85-455b-a848-ae1109eee932" (UID: "b08d10d9-fc85-455b-a848-ae1109eee932"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.172438 4854 scope.go:117] "RemoveContainer" containerID="86662bd3a93b1430cb203d0ee17d2ea0b4f1cefadac58b3386a7bff731bfaf1a" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.197956 4854 scope.go:117] "RemoveContainer" containerID="9d52bd6c98dcad3da82ca5a886fc8144733aa46b6e370461608024b65dfeb5b4" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.223581 4854 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08d10d9-fc85-455b-a848-ae1109eee932-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.474855 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9b9g"] Jan 03 07:07:30 crc kubenswrapper[4854]: I0103 07:07:30.489343 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n9b9g"] Jan 03 07:07:32 crc kubenswrapper[4854]: I0103 07:07:32.099660 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:07:32 crc kubenswrapper[4854]: I0103 07:07:32.141242 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" path="/var/lib/kubelet/pods/b08d10d9-fc85-455b-a848-ae1109eee932/volumes" Jan 03 07:07:32 crc kubenswrapper[4854]: I0103 07:07:32.167339 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/3.log" Jan 03 07:07:32 crc kubenswrapper[4854]: I0103 07:07:32.168941 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 03 07:07:32 crc kubenswrapper[4854]: I0103 07:07:32.170176 4854 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5d857e54272793d26f7cdc626f49935abb53530d63176989e1deaea067cc9fc4" exitCode=137 Jan 03 07:07:32 crc kubenswrapper[4854]: I0103 07:07:32.170233 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5d857e54272793d26f7cdc626f49935abb53530d63176989e1deaea067cc9fc4"} Jan 03 07:07:32 crc kubenswrapper[4854]: I0103 07:07:32.170292 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ebc8d07c3a598cdd7da41be0fe715ec6b499f04aab923253eea81cec6caed590"} Jan 03 07:07:32 crc kubenswrapper[4854]: I0103 07:07:32.170314 4854 scope.go:117] "RemoveContainer" containerID="5a0763e01c1342b89c8825637cbcf287d92d7340beb51666b69cf6ebf12fd3b9" Jan 03 07:07:33 crc kubenswrapper[4854]: I0103 07:07:33.183565 4854 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/3.log" Jan 03 07:07:35 crc kubenswrapper[4854]: I0103 07:07:35.373754 4854 patch_prober.go:28] interesting 
pod/console-67666b4d85-nwx4t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" start-of-body= Jan 03 07:07:35 crc kubenswrapper[4854]: I0103 07:07:35.374204 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" Jan 03 07:07:36 crc kubenswrapper[4854]: I0103 07:07:36.791955 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 07:07:37 crc kubenswrapper[4854]: I0103 07:07:37.083324 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:07:41 crc kubenswrapper[4854]: I0103 07:07:41.342188 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 07:07:41 crc kubenswrapper[4854]: I0103 07:07:41.351451 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 07:07:41 crc kubenswrapper[4854]: I0103 07:07:41.756111 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 07:07:41 crc kubenswrapper[4854]: I0103 07:07:41.756201 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 07:07:42 crc kubenswrapper[4854]: I0103 07:07:42.090965 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 07:07:42 crc kubenswrapper[4854]: E0103 07:07:42.139232 4854 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.102:44146->38.102.83.102:42659: write tcp 38.102.83.102:44146->38.102.83.102:42659: write: connection reset by peer Jan 03 07:07:42 crc kubenswrapper[4854]: I0103 07:07:42.306770 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 07:07:43 crc kubenswrapper[4854]: I0103 07:07:43.665837 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 07:07:43 crc kubenswrapper[4854]: I0103 07:07:43.733971 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 03 07:07:45 crc kubenswrapper[4854]: I0103 07:07:45.373674 4854 patch_prober.go:28] interesting pod/console-67666b4d85-nwx4t container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused" start-of-body=
Jan 03 07:07:45 crc kubenswrapper[4854]: I0103 07:07:45.374134 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67666b4d85-nwx4t" podUID="002174a6-3b57-4eba-985b-9fd7c492b143" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": dial tcp 10.217.0.138:8443: connect: connection refused"
Jan 03 07:07:47 crc kubenswrapper[4854]: I0103 07:07:47.088337 4854 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 07:07:47 crc kubenswrapper[4854]: I0103 07:07:47.088842 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 03 07:07:47 crc kubenswrapper[4854]: I0103 07:07:47.090106 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"4d3d8e36da92c4909ac59c6400fdc8382de92a3ce9e8dfa7e4f358e22357e734"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed startup probe, will be restarted"
Jan 03 07:07:47 crc kubenswrapper[4854]: I0103 07:07:47.090203 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2d802db5-d336-4639-8264-e628fa15d820" containerName="cinder-scheduler" containerID="cri-o://4d3d8e36da92c4909ac59c6400fdc8382de92a3ce9e8dfa7e4f358e22357e734" gracePeriod=30
Jan 03 07:07:52 crc kubenswrapper[4854]: I0103 07:07:52.804475 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 07:07:52 crc kubenswrapper[4854]: I0103 07:07:52.807619 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-notification-agent" containerID="cri-o://aae61ed9f182e8e29eef552d76390e004d87bb0bd9362e48602c571c0c1ce11e" gracePeriod=30
Jan 03 07:07:52 crc kubenswrapper[4854]: I0103 07:07:52.808151 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" containerID="cri-o://3dd84dedf8688d5ff3d1a5fbdcb29d9e3cd0b508fad6ac6b71c603d2fc526568" gracePeriod=30
Jan 03 07:07:52 crc kubenswrapper[4854]: I0103 07:07:52.808194 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="proxy-httpd" containerID="cri-o://2c0685c51559d572f3197dd8105f8a3a7d53eccdda8a3d791678eee0c10780ad" gracePeriod=30
Jan 03 07:07:52 crc kubenswrapper[4854]: I0103 07:07:52.808227 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="sg-core" containerID="cri-o://1a772472bda0bd83a0686fb5c46dd06624d554cb7005669d6131cd275f4d654d" gracePeriod=30
Jan 03 07:07:53 crc kubenswrapper[4854]: I0103 07:07:53.487179 4854 generic.go:334] "Generic (PLEG): container finished" podID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerID="1a772472bda0bd83a0686fb5c46dd06624d554cb7005669d6131cd275f4d654d" exitCode=2
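The ceilometer-0 teardown above begins with a "SyncLoop DELETE" sourced from the API, after which each of the pod's four containers is killed with the same 30-second grace period. The originating API call is equivalent to this client-go sketch; the kubeconfig loading is an assumption, while the namespace, pod name, and grace period are from the log.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the default kubeconfig location (assumed setup).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Delete the pod with the grace period seen in the kill entries above;
	// the kubelet then sends SIGTERM and escalates to SIGKILL after 30s.
	grace := int64(30)
	if err := cs.CoreV1().Pods("openstack").Delete(context.TODO(), "ceilometer-0",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}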
containerID="1a772472bda0bd83a0686fb5c46dd06624d554cb7005669d6131cd275f4d654d" exitCode=2 Jan 03 07:07:53 crc kubenswrapper[4854]: I0103 07:07:53.487282 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerDied","Data":"1a772472bda0bd83a0686fb5c46dd06624d554cb7005669d6131cd275f4d654d"} Jan 03 07:07:54 crc kubenswrapper[4854]: I0103 07:07:54.540818 4854 generic.go:334] "Generic (PLEG): container finished" podID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerID="3dd84dedf8688d5ff3d1a5fbdcb29d9e3cd0b508fad6ac6b71c603d2fc526568" exitCode=0 Jan 03 07:07:54 crc kubenswrapper[4854]: I0103 07:07:54.542965 4854 generic.go:334] "Generic (PLEG): container finished" podID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerID="2c0685c51559d572f3197dd8105f8a3a7d53eccdda8a3d791678eee0c10780ad" exitCode=0 Jan 03 07:07:54 crc kubenswrapper[4854]: I0103 07:07:54.541160 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerDied","Data":"3dd84dedf8688d5ff3d1a5fbdcb29d9e3cd0b508fad6ac6b71c603d2fc526568"} Jan 03 07:07:54 crc kubenswrapper[4854]: I0103 07:07:54.543051 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerDied","Data":"2c0685c51559d572f3197dd8105f8a3a7d53eccdda8a3d791678eee0c10780ad"} Jan 03 07:07:54 crc kubenswrapper[4854]: I0103 07:07:54.543092 4854 scope.go:117] "RemoveContainer" containerID="7def2b0ff7c9c47e2ab148307011319889a3bf934eaf31dbecc60b60a9497a0e" Jan 03 07:07:55 crc kubenswrapper[4854]: I0103 07:07:55.387894 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 07:07:55 crc kubenswrapper[4854]: I0103 07:07:55.392311 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67666b4d85-nwx4t" Jan 03 07:07:55 crc kubenswrapper[4854]: I0103 07:07:55.855771 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 03 07:07:55 crc kubenswrapper[4854]: I0103 07:07:55.985926 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 03 07:07:56 crc kubenswrapper[4854]: I0103 07:07:56.587698 4854 generic.go:334] "Generic (PLEG): container finished" podID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerID="aae61ed9f182e8e29eef552d76390e004d87bb0bd9362e48602c571c0c1ce11e" exitCode=0 Jan 03 07:07:56 crc kubenswrapper[4854]: I0103 07:07:56.588042 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerDied","Data":"aae61ed9f182e8e29eef552d76390e004d87bb0bd9362e48602c571c0c1ce11e"} Jan 03 07:07:56 crc kubenswrapper[4854]: I0103 07:07:56.886894 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 03 07:07:56 crc kubenswrapper[4854]: I0103 07:07:56.978922 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 07:07:56 crc kubenswrapper[4854]: I0103 07:07:56.993411 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.122542 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-log-httpd\") pod \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.122638 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-run-httpd\") pod \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.122693 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-scripts\") pod \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.122764 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-config-data\") pod \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.122791 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-ceilometer-tls-certs\") pod \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.122820 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-combined-ca-bundle\") pod \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.122871 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-sg-core-conf-yaml\") pod \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.122900 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5bsp\" (UniqueName: \"kubernetes.io/projected/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-kube-api-access-f5bsp\") pod \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\" (UID: \"ca1d3e35-8df0-4b19-891d-3f2aecc401ab\") " Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.125743 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ca1d3e35-8df0-4b19-891d-3f2aecc401ab" (UID: "ca1d3e35-8df0-4b19-891d-3f2aecc401ab"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.126504 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ca1d3e35-8df0-4b19-891d-3f2aecc401ab" (UID: "ca1d3e35-8df0-4b19-891d-3f2aecc401ab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.148017 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-kube-api-access-f5bsp" (OuterVolumeSpecName: "kube-api-access-f5bsp") pod "ca1d3e35-8df0-4b19-891d-3f2aecc401ab" (UID: "ca1d3e35-8df0-4b19-891d-3f2aecc401ab"). InnerVolumeSpecName "kube-api-access-f5bsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.173997 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-scripts" (OuterVolumeSpecName: "scripts") pod "ca1d3e35-8df0-4b19-891d-3f2aecc401ab" (UID: "ca1d3e35-8df0-4b19-891d-3f2aecc401ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.201371 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ca1d3e35-8df0-4b19-891d-3f2aecc401ab" (UID: "ca1d3e35-8df0-4b19-891d-3f2aecc401ab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.231890 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.231921 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5bsp\" (UniqueName: \"kubernetes.io/projected/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-kube-api-access-f5bsp\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.231933 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.231942 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.231951 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.296012 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ca1d3e35-8df0-4b19-891d-3f2aecc401ab" (UID: "ca1d3e35-8df0-4b19-891d-3f2aecc401ab"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.329499 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca1d3e35-8df0-4b19-891d-3f2aecc401ab" (UID: "ca1d3e35-8df0-4b19-891d-3f2aecc401ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.337169 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.337199 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.403679 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-config-data" (OuterVolumeSpecName: "config-data") pod "ca1d3e35-8df0-4b19-891d-3f2aecc401ab" (UID: "ca1d3e35-8df0-4b19-891d-3f2aecc401ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.448634 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca1d3e35-8df0-4b19-891d-3f2aecc401ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.602803 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.605835 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca1d3e35-8df0-4b19-891d-3f2aecc401ab","Type":"ContainerDied","Data":"a52aafe9c232646899bfb0ad5cd7041209f7ed8e30744fc0d3075b071e5b1d39"} Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.605880 4854 scope.go:117] "RemoveContainer" containerID="3dd84dedf8688d5ff3d1a5fbdcb29d9e3cd0b508fad6ac6b71c603d2fc526568" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.647060 4854 scope.go:117] "RemoveContainer" containerID="2c0685c51559d572f3197dd8105f8a3a7d53eccdda8a3d791678eee0c10780ad" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.669316 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.704770 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.721074 4854 scope.go:117] "RemoveContainer" containerID="1a772472bda0bd83a0686fb5c46dd06624d554cb7005669d6131cd275f4d654d" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.733678 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757101 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="extract-utilities" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757130 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="extract-utilities" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757152 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757159 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757173 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-notification-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757190 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-notification-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757211 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="extract-utilities" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757216 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="extract-utilities" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757225 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="registry-server" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757231 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="registry-server" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757247 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" 
containerName="sg-core" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757254 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="sg-core" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757266 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="proxy-httpd" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757271 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="proxy-httpd" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757281 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="extract-content" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757286 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="extract-content" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757317 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="registry-server" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757324 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="registry-server" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.757335 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="extract-content" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757340 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="extract-content" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757643 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08d10d9-fc85-455b-a848-ae1109eee932" containerName="registry-server" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757657 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-notification-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757677 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="sg-core" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757687 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="proxy-httpd" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757702 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757714 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.757725 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff403f4-841a-4305-8f8d-4f5fd6b14765" containerName="registry-server" Jan 03 07:07:57 crc kubenswrapper[4854]: E0103 07:07:57.758001 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.758010 4854 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" containerName="ceilometer-central-agent" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.762832 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.764406 4854 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.803383 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.804013 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.805286 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.878658 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.878706 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.878725 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-config-data\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.878753 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-scripts\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.878768 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzqfb\" (UniqueName: \"kubernetes.io/projected/339ed5c0-8501-4811-aac6-0026301ea531-kube-api-access-pzqfb\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.878867 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-run-httpd\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.878886 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 
07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.878991 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-log-httpd\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.886053 4854 scope.go:117] "RemoveContainer" containerID="aae61ed9f182e8e29eef552d76390e004d87bb0bd9362e48602c571c0c1ce11e" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.980773 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.980828 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.980852 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-config-data\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.980890 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-scripts\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.980910 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzqfb\" (UniqueName: \"kubernetes.io/projected/339ed5c0-8501-4811-aac6-0026301ea531-kube-api-access-pzqfb\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.981043 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-run-httpd\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.981072 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.981205 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-log-httpd\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.982031 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-log-httpd\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.987370 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:57 crc kubenswrapper[4854]: I0103 07:07:57.988221 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-run-httpd\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:58 crc kubenswrapper[4854]: I0103 07:07:57.997821 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:58 crc kubenswrapper[4854]: I0103 07:07:58.002037 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-scripts\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:58 crc kubenswrapper[4854]: I0103 07:07:58.002516 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-config-data\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:58 crc kubenswrapper[4854]: I0103 07:07:58.006748 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:58 crc kubenswrapper[4854]: I0103 07:07:58.009119 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzqfb\" (UniqueName: \"kubernetes.io/projected/339ed5c0-8501-4811-aac6-0026301ea531-kube-api-access-pzqfb\") pod \"ceilometer-0\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " pod="openstack/ceilometer-0" Jan 03 07:07:58 crc kubenswrapper[4854]: I0103 07:07:58.146855 4854 util.go:30] "No sandbox for pod can be found. 
Jan 03 07:07:58 crc kubenswrapper[4854]: I0103 07:07:58.194930 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca1d3e35-8df0-4b19-891d-3f2aecc401ab" path="/var/lib/kubelet/pods/ca1d3e35-8df0-4b19-891d-3f2aecc401ab/volumes" Jan 03 07:07:58 crc kubenswrapper[4854]: I0103 07:07:58.933624 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:07:59 crc kubenswrapper[4854]: I0103 07:07:59.679586 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerStarted","Data":"c711ed3800d1572bcdf7c3c735205439ddbc96b0e01ee54177ff66dd69b6fd78"} Jan 03 07:08:00 crc kubenswrapper[4854]: I0103 07:08:00.723904 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerStarted","Data":"675c88ea6c6600db71a40b6322eb4d87fb4105f8b167523b1cb0f7a059a3b4a8"} Jan 03 07:08:01 crc kubenswrapper[4854]: I0103 07:08:01.743293 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerStarted","Data":"7fae08b2d2291d8a3db0bab098ffa5e51db06ac897229a4284d507920e2dcca4"} Jan 03 07:08:02 crc kubenswrapper[4854]: I0103 07:08:02.757835 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerStarted","Data":"3ec7a5e97c3e54fbf671aed26b627253759ca8c8f7b2164671e6fe62220a46ef"} Jan 03 07:08:03 crc kubenswrapper[4854]: I0103 07:08:03.373136 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:08:03 crc kubenswrapper[4854]: I0103 07:08:03.772623 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerStarted","Data":"2fd01c94b39258f2b10c86d45a9d74670ecb90d0ac7bbf5c6fd2034fe8f1ed70"} Jan 03 07:08:03 crc kubenswrapper[4854]: I0103 07:08:03.773008 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="ceilometer-central-agent" containerID="cri-o://675c88ea6c6600db71a40b6322eb4d87fb4105f8b167523b1cb0f7a059a3b4a8" gracePeriod=30 Jan 03 07:08:03 crc kubenswrapper[4854]: I0103 07:08:03.773268 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 07:08:03 crc kubenswrapper[4854]: I0103 07:08:03.773576 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="proxy-httpd" containerID="cri-o://2fd01c94b39258f2b10c86d45a9d74670ecb90d0ac7bbf5c6fd2034fe8f1ed70" gracePeriod=30 Jan 03 07:08:03 crc kubenswrapper[4854]: I0103 07:08:03.773618 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="sg-core" containerID="cri-o://3ec7a5e97c3e54fbf671aed26b627253759ca8c8f7b2164671e6fe62220a46ef" gracePeriod=30 Jan 03 07:08:03 crc kubenswrapper[4854]: I0103 07:08:03.773649 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="ceilometer-notification-agent"
containerID="cri-o://7fae08b2d2291d8a3db0bab098ffa5e51db06ac897229a4284d507920e2dcca4" gracePeriod=30 Jan 03 07:08:03 crc kubenswrapper[4854]: I0103 07:08:03.810300 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.274729989 podStartE2EDuration="6.809736061s" podCreationTimestamp="2026-01-03 07:07:57 +0000 UTC" firstStartedPulling="2026-01-03 07:07:59.547987172 +0000 UTC m=+5257.874563744" lastFinishedPulling="2026-01-03 07:08:03.082993244 +0000 UTC m=+5261.409569816" observedRunningTime="2026-01-03 07:08:03.803877746 +0000 UTC m=+5262.130454318" watchObservedRunningTime="2026-01-03 07:08:03.809736061 +0000 UTC m=+5262.136312633" Jan 03 07:08:04 crc kubenswrapper[4854]: I0103 07:08:04.788309 4854 generic.go:334] "Generic (PLEG): container finished" podID="339ed5c0-8501-4811-aac6-0026301ea531" containerID="2fd01c94b39258f2b10c86d45a9d74670ecb90d0ac7bbf5c6fd2034fe8f1ed70" exitCode=0 Jan 03 07:08:04 crc kubenswrapper[4854]: I0103 07:08:04.788568 4854 generic.go:334] "Generic (PLEG): container finished" podID="339ed5c0-8501-4811-aac6-0026301ea531" containerID="3ec7a5e97c3e54fbf671aed26b627253759ca8c8f7b2164671e6fe62220a46ef" exitCode=2 Jan 03 07:08:04 crc kubenswrapper[4854]: I0103 07:08:04.788395 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerDied","Data":"2fd01c94b39258f2b10c86d45a9d74670ecb90d0ac7bbf5c6fd2034fe8f1ed70"} Jan 03 07:08:04 crc kubenswrapper[4854]: I0103 07:08:04.788605 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerDied","Data":"3ec7a5e97c3e54fbf671aed26b627253759ca8c8f7b2164671e6fe62220a46ef"} Jan 03 07:08:11 crc kubenswrapper[4854]: I0103 07:08:11.805384 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 07:08:11 crc kubenswrapper[4854]: I0103 07:08:11.806124 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 07:08:13 crc kubenswrapper[4854]: I0103 07:08:13.065586 4854 generic.go:334] "Generic (PLEG): container finished" podID="339ed5c0-8501-4811-aac6-0026301ea531" containerID="675c88ea6c6600db71a40b6322eb4d87fb4105f8b167523b1cb0f7a059a3b4a8" exitCode=0 Jan 03 07:08:13 crc kubenswrapper[4854]: I0103 07:08:13.065664 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerDied","Data":"675c88ea6c6600db71a40b6322eb4d87fb4105f8b167523b1cb0f7a059a3b4a8"} Jan 03 07:08:18 crc kubenswrapper[4854]: I0103 07:08:18.149260 4854 generic.go:334] "Generic (PLEG): container finished" podID="2d802db5-d336-4639-8264-e628fa15d820" containerID="4d3d8e36da92c4909ac59c6400fdc8382de92a3ce9e8dfa7e4f358e22357e734" exitCode=137 Jan 03 07:08:18 crc kubenswrapper[4854]: I0103 07:08:18.149812 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2d802db5-d336-4639-8264-e628fa15d820","Type":"ContainerDied","Data":"4d3d8e36da92c4909ac59c6400fdc8382de92a3ce9e8dfa7e4f358e22357e734"} Jan 03 07:08:18 crc kubenswrapper[4854]: I0103 07:08:18.149840 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d802db5-d336-4639-8264-e628fa15d820","Type":"ContainerStarted","Data":"244693312cc1ce88df5afd07ee8c802018cee76a44e1ea0f5a5ea82e030fdb40"} Jan 03 07:08:18 crc kubenswrapper[4854]: I0103 07:08:18.149860 4854 scope.go:117] "RemoveContainer" containerID="8d605ff5812a1bb92720d9dfe6ed631408ba09c2d540278cddd1b7b5491d467b" Jan 03 07:08:22 crc kubenswrapper[4854]: I0103 07:08:22.061641 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 03 07:08:27 crc kubenswrapper[4854]: I0103 07:08:27.251527 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 03 07:08:28 crc kubenswrapper[4854]: I0103 07:08:28.149725 4854 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.72:3000/\": dial tcp 10.217.1.72:3000: connect: connection refused" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.375859 4854 generic.go:334] "Generic (PLEG): container finished" podID="339ed5c0-8501-4811-aac6-0026301ea531" containerID="7fae08b2d2291d8a3db0bab098ffa5e51db06ac897229a4284d507920e2dcca4" exitCode=137 Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.376409 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerDied","Data":"7fae08b2d2291d8a3db0bab098ffa5e51db06ac897229a4284d507920e2dcca4"} Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.376438 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"339ed5c0-8501-4811-aac6-0026301ea531","Type":"ContainerDied","Data":"c711ed3800d1572bcdf7c3c735205439ddbc96b0e01ee54177ff66dd69b6fd78"} Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.376450 4854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c711ed3800d1572bcdf7c3c735205439ddbc96b0e01ee54177ff66dd69b6fd78" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.382120 4854 generic.go:334] "Generic (PLEG): container finished" podID="5899ebcd-eec0-44ae-9e07-98b443d209c1" containerID="f12fe622af614b41bd44ec6bb3c9b091e81f021cd35ea69f811ff6d066d06d2b" exitCode=0 Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.382158 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" event={"ID":"5899ebcd-eec0-44ae-9e07-98b443d209c1","Type":"ContainerDied","Data":"f12fe622af614b41bd44ec6bb3c9b091e81f021cd35ea69f811ff6d066d06d2b"} Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.382179 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt" event={"ID":"5899ebcd-eec0-44ae-9e07-98b443d209c1","Type":"ContainerStarted","Data":"c03b7a8862371981d02160b025fb4f54f4abd66e2255bf8d0d3285b131ff0df2"} Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.434878 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.441107 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-run-httpd\") pod \"339ed5c0-8501-4811-aac6-0026301ea531\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.441168 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-combined-ca-bundle\") pod \"339ed5c0-8501-4811-aac6-0026301ea531\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.441205 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-ceilometer-tls-certs\") pod \"339ed5c0-8501-4811-aac6-0026301ea531\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.441245 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-sg-core-conf-yaml\") pod \"339ed5c0-8501-4811-aac6-0026301ea531\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.441309 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-config-data\") pod \"339ed5c0-8501-4811-aac6-0026301ea531\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.441397 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-log-httpd\") pod \"339ed5c0-8501-4811-aac6-0026301ea531\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.441444 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzqfb\" (UniqueName: \"kubernetes.io/projected/339ed5c0-8501-4811-aac6-0026301ea531-kube-api-access-pzqfb\") pod \"339ed5c0-8501-4811-aac6-0026301ea531\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.441461 4854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-scripts\") pod \"339ed5c0-8501-4811-aac6-0026301ea531\" (UID: \"339ed5c0-8501-4811-aac6-0026301ea531\") " Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.446328 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "339ed5c0-8501-4811-aac6-0026301ea531" (UID: "339ed5c0-8501-4811-aac6-0026301ea531"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.446703 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "339ed5c0-8501-4811-aac6-0026301ea531" (UID: "339ed5c0-8501-4811-aac6-0026301ea531"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.453778 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339ed5c0-8501-4811-aac6-0026301ea531-kube-api-access-pzqfb" (OuterVolumeSpecName: "kube-api-access-pzqfb") pod "339ed5c0-8501-4811-aac6-0026301ea531" (UID: "339ed5c0-8501-4811-aac6-0026301ea531"). InnerVolumeSpecName "kube-api-access-pzqfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.470997 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-scripts" (OuterVolumeSpecName: "scripts") pod "339ed5c0-8501-4811-aac6-0026301ea531" (UID: "339ed5c0-8501-4811-aac6-0026301ea531"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.501601 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "339ed5c0-8501-4811-aac6-0026301ea531" (UID: "339ed5c0-8501-4811-aac6-0026301ea531"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.543300 4854 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.543328 4854 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.543338 4854 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzqfb\" (UniqueName: \"kubernetes.io/projected/339ed5c0-8501-4811-aac6-0026301ea531-kube-api-access-pzqfb\") on node \"crc\" DevicePath \"\"" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.543347 4854 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.543355 4854 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/339ed5c0-8501-4811-aac6-0026301ea531-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.579158 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "339ed5c0-8501-4811-aac6-0026301ea531" (UID: "339ed5c0-8501-4811-aac6-0026301ea531"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.592276 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-config-data" (OuterVolumeSpecName: "config-data") pod "339ed5c0-8501-4811-aac6-0026301ea531" (UID: "339ed5c0-8501-4811-aac6-0026301ea531"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.602255 4854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "339ed5c0-8501-4811-aac6-0026301ea531" (UID: "339ed5c0-8501-4811-aac6-0026301ea531"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.645419 4854 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.645457 4854 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 07:08:34 crc kubenswrapper[4854]: I0103 07:08:34.645471 4854 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/339ed5c0-8501-4811-aac6-0026301ea531-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.408146 4854 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.490281 4854 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.507651 4854 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.521764 4854 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:08:35 crc kubenswrapper[4854]: E0103 07:08:35.522352 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="sg-core" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.522373 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="sg-core" Jan 03 07:08:35 crc kubenswrapper[4854]: E0103 07:08:35.522392 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="proxy-httpd" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.522398 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="proxy-httpd" Jan 03 07:08:35 crc kubenswrapper[4854]: E0103 07:08:35.522418 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="ceilometer-central-agent" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.522425 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="ceilometer-central-agent" Jan 03 07:08:35 crc kubenswrapper[4854]: E0103 07:08:35.522443 4854 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="ceilometer-notification-agent" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.522451 4854 state_mem.go:107] "Deleted CPUSet assignment" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="ceilometer-notification-agent" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.522745 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="sg-core" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.522767 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="proxy-httpd" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.522783 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="ceilometer-central-agent" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.522799 4854 memory_manager.go:354] "RemoveStaleState removing state" podUID="339ed5c0-8501-4811-aac6-0026301ea531" containerName="ceilometer-notification-agent" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.525110 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.531574 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.531601 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.531759 4854 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.553180 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.567150 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-log-httpd\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.567209 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-config-data\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.567269 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.567357 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.567376 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-scripts\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.567410 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.567433 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-run-httpd\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.567546 4854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knzs8\" (UniqueName: 
\"kubernetes.io/projected/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-kube-api-access-knzs8\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.668196 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knzs8\" (UniqueName: \"kubernetes.io/projected/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-kube-api-access-knzs8\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.668260 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-log-httpd\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.668295 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-config-data\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.668337 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.668398 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.668417 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-scripts\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.668444 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.668464 4854 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-run-httpd\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.669278 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-run-httpd\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.670294 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-log-httpd\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.674692 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.675068 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-scripts\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.675331 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.675398 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-config-data\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.692130 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.697835 4854 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knzs8\" (UniqueName: \"kubernetes.io/projected/7389f2e6-4d1b-4228-b30b-29f73e5a95e2-kube-api-access-knzs8\") pod \"ceilometer-0\" (UID: \"7389f2e6-4d1b-4228-b30b-29f73e5a95e2\") " pod="openstack/ceilometer-0" Jan 03 07:08:35 crc kubenswrapper[4854]: I0103 07:08:35.856518 4854 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 07:08:36 crc kubenswrapper[4854]: I0103 07:08:36.131763 4854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339ed5c0-8501-4811-aac6-0026301ea531" path="/var/lib/kubelet/pods/339ed5c0-8501-4811-aac6-0026301ea531/volumes"
Jan 03 07:08:36 crc kubenswrapper[4854]: W0103 07:08:36.365858 4854 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7389f2e6_4d1b_4228_b30b_29f73e5a95e2.slice/crio-cadcda3b8346c301ca2b3d6562e46270f0441e4766d9c1e855c9c340babc316c WatchSource:0}: Error finding container cadcda3b8346c301ca2b3d6562e46270f0441e4766d9c1e855c9c340babc316c: Status 404 returned error can't find the container with id cadcda3b8346c301ca2b3d6562e46270f0441e4766d9c1e855c9c340babc316c
Jan 03 07:08:36 crc kubenswrapper[4854]: I0103 07:08:36.368892 4854 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 03 07:08:36 crc kubenswrapper[4854]: I0103 07:08:36.378170 4854 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 03 07:08:36 crc kubenswrapper[4854]: I0103 07:08:36.421049 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7389f2e6-4d1b-4228-b30b-29f73e5a95e2","Type":"ContainerStarted","Data":"cadcda3b8346c301ca2b3d6562e46270f0441e4766d9c1e855c9c340babc316c"}
Jan 03 07:08:37 crc kubenswrapper[4854]: I0103 07:08:37.458818 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7389f2e6-4d1b-4228-b30b-29f73e5a95e2","Type":"ContainerStarted","Data":"85f34f8e2a282d8a17ecf19e697ee596758acf84dd0066f9d75f21d1670e5e80"}
Jan 03 07:08:38 crc kubenswrapper[4854]: I0103 07:08:38.474527 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7389f2e6-4d1b-4228-b30b-29f73e5a95e2","Type":"ContainerStarted","Data":"5e790009a1d4871347b8828142db570711b8e8bfa14df7a145b15bca78f35630"}
Jan 03 07:08:39 crc kubenswrapper[4854]: I0103 07:08:39.492597 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7389f2e6-4d1b-4228-b30b-29f73e5a95e2","Type":"ContainerStarted","Data":"ceacbeeb3db77ad3be4c298c6eae64faa673f1d5e5a59fcf017709472fb0b2d1"}
Jan 03 07:08:41 crc kubenswrapper[4854]: I0103 07:08:41.538836 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7389f2e6-4d1b-4228-b30b-29f73e5a95e2","Type":"ContainerStarted","Data":"85ff3e33f15636a8db6f851e0a1162b586ee76a8cb707429b03251acd16afd30"}
Jan 03 07:08:41 crc kubenswrapper[4854]: I0103 07:08:41.539392 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 03 07:08:41 crc kubenswrapper[4854]: I0103 07:08:41.568307 4854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.973435803 podStartE2EDuration="6.568280353s" podCreationTimestamp="2026-01-03 07:08:35 +0000 UTC" firstStartedPulling="2026-01-03 07:08:36.367995455 +0000 UTC m=+5294.694572047" lastFinishedPulling="2026-01-03 07:08:39.962840025 +0000 UTC m=+5298.289416597" observedRunningTime="2026-01-03 07:08:41.563593607 +0000 UTC m=+5299.890170199" watchObservedRunningTime="2026-01-03 07:08:41.568280353 +0000 UTC m=+5299.894856925"
Jan 03 07:08:41 crc kubenswrapper[4854]: I0103 07:08:41.755479 4854 patch_prober.go:28] interesting pod/machine-config-daemon-qdhfx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 07:08:41 crc kubenswrapper[4854]: I0103 07:08:41.755548 4854 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 07:08:41 crc kubenswrapper[4854]: I0103 07:08:41.755599 4854 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx"
Jan 03 07:08:41 crc kubenswrapper[4854]: I0103 07:08:41.756363 4854 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"} pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 03 07:08:41 crc kubenswrapper[4854]: I0103 07:08:41.756434 4854 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerName="machine-config-daemon" containerID="cri-o://c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b" gracePeriod=600
Jan 03 07:08:41 crc kubenswrapper[4854]: E0103 07:08:41.886479 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:08:42 crc kubenswrapper[4854]: I0103 07:08:42.527415 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt"
Jan 03 07:08:42 crc kubenswrapper[4854]: I0103 07:08:42.527705 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt"
Jan 03 07:08:42 crc kubenswrapper[4854]: I0103 07:08:42.552192 4854 generic.go:334] "Generic (PLEG): container finished" podID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b" exitCode=0
Jan 03 07:08:42 crc kubenswrapper[4854]: I0103 07:08:42.552239 4854 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" event={"ID":"e8c88d7d-092b-44f7-b4c8-3540be3c0e8b","Type":"ContainerDied","Data":"c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"}
Jan 03 07:08:42 crc kubenswrapper[4854]: I0103 07:08:42.552317 4854 scope.go:117] "RemoveContainer" containerID="8eb267d56b3312877d76a39e133c47d96122041fd2c9482d4408e71d0d1ac7f0"
Jan 03 07:08:42 crc kubenswrapper[4854]: I0103 07:08:42.553779 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:08:42 crc kubenswrapper[4854]: E0103 07:08:42.554502 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:08:58 crc kubenswrapper[4854]: I0103 07:08:58.118979 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:08:58 crc kubenswrapper[4854]: E0103 07:08:58.120388 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:09:02 crc kubenswrapper[4854]: I0103 07:09:02.534196 4854 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt"
Jan 03 07:09:02 crc kubenswrapper[4854]: I0103 07:09:02.541019 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-665fcf668f-65wrt"
Jan 03 07:09:05 crc kubenswrapper[4854]: I0103 07:09:05.866878 4854 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 03 07:09:09 crc kubenswrapper[4854]: I0103 07:09:09.118847 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:09:09 crc kubenswrapper[4854]: E0103 07:09:09.119982 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:09:24 crc kubenswrapper[4854]: I0103 07:09:24.118640 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:09:24 crc kubenswrapper[4854]: E0103 07:09:24.120296 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:09:35 crc kubenswrapper[4854]: I0103 07:09:35.120472 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:09:35 crc kubenswrapper[4854]: E0103 07:09:35.121902 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:09:47 crc kubenswrapper[4854]: I0103 07:09:47.121027 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:09:47 crc kubenswrapper[4854]: E0103 07:09:47.124381 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:09:47 crc kubenswrapper[4854]: I0103 07:09:47.963750 4854 trace.go:236] Trace[682682564]: "Calculate volume metrics of storage for pod minio-dev/minio" (03-Jan-2026 07:09:46.874) (total time: 1088ms):
Jan 03 07:09:47 crc kubenswrapper[4854]: Trace[682682564]: [1.088812072s] [1.088812072s] END
Jan 03 07:09:58 crc kubenswrapper[4854]: I0103 07:09:58.119212 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:09:58 crc kubenswrapper[4854]: E0103 07:09:58.122357 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:10:12 crc kubenswrapper[4854]: I0103 07:10:12.128323 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:10:12 crc kubenswrapper[4854]: E0103 07:10:12.129360 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:10:25 crc kubenswrapper[4854]: I0103 07:10:25.118884 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:10:25 crc kubenswrapper[4854]: E0103 07:10:25.120125 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:10:36 crc kubenswrapper[4854]: I0103 07:10:36.119544 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:10:36 crc kubenswrapper[4854]: E0103 07:10:36.120697 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:10:48 crc kubenswrapper[4854]: I0103 07:10:48.117984 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:10:48 crc kubenswrapper[4854]: E0103 07:10:48.118965 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:11:00 crc kubenswrapper[4854]: I0103 07:11:00.118480 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:11:00 crc kubenswrapper[4854]: E0103 07:11:00.119500 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:11:11 crc kubenswrapper[4854]: I0103 07:11:11.119304 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:11:11 crc kubenswrapper[4854]: E0103 07:11:11.123507 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:11:24 crc kubenswrapper[4854]: I0103 07:11:24.118951 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:11:24 crc kubenswrapper[4854]: E0103 07:11:24.119913 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"
Jan 03 07:11:38 crc kubenswrapper[4854]: I0103 07:11:38.119365 4854 scope.go:117] "RemoveContainer" containerID="c60d8ff55fa8b8084957ed5941319569012ea8db784d8f788a5306705491233b"
Jan 03 07:11:38 crc kubenswrapper[4854]: E0103 07:11:38.120563 4854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qdhfx_openshift-machine-config-operator(e8c88d7d-092b-44f7-b4c8-3540be3c0e8b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qdhfx" podUID="e8c88d7d-092b-44f7-b4c8-3540be3c0e8b"